This is the online appendix of the paper Rank-normalization, folding, and localization: A version of \(\widehat{R}\) that successfully assesses the convergence of iterative simulation algorithms, available on GitHub (https://github.com/avehtari/rhat). Here, we provide all the code for the examples presented in the paper, as well as additional numerical experiments not discussed in the paper itself.
To help you find your way through the examples presented in this online appendix, below is a list of links to the examples discussed in the paper:
In this section, we will go through some examples to demonstrate the usefulness of our proposed methods as well as the associated workflow in determining convergence. Appendices B-E contain more detailed analysis of different algorithm variants and further examples.
First, we load all the necessary R packages and additional functions.
library(tidyverse)
library(gridExtra)
library(rstan)
options(mc.cores = parallel::detectCores())  # run chains in parallel
rstan_options(auto_write = TRUE)             # cache compiled Stan models
library(bayesplot)
theme_set(bayesplot::theme_default(base_family = "sans"))
library(rjags)
library(abind)
# functions implementing the diagnostics and plots used below
source('monitornew.R')
source('monitorplot.R')
This section relates to the examples presented in Section 5.1 of the paper.
The classic split-\(\widehat{R}\) is based on calculating within and between chain variances. If the marginal distribution of a chain is such that the variance is not defined (i.e., infinite), the classic split-\(\widehat{R}\) is not well justified. In this section, we will use the Cauchy distribution as an example of such a distribution. The following Cauchy models are from Michael Betancourt’s case study Fitting The Cauchy Distribution. Appendix C contains more detailed analysis of different algorithm variants and further Cauchy examples.
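To see why variance-based diagnostics break down here, consider a quick base-R illustration (our own sketch, not part of the paper’s analysis): the running mean of independent Cauchy draws never settles, so there is no well-defined target for within- and between-chain variances.
# Running mean of independent Cauchy draws (illustration only):
# the mean does not exist, so the running mean never stabilizes.
set.seed(1)
x <- rcauchy(1e5)
plot(cumsum(x) / seq_along(x), type = "l", log = "x",
     xlab = "number of draws", ylab = "running mean")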
The nominal Cauchy model with direct parameterization is as follows:
writeLines(readLines("cauchy_nom.stan"))
parameters {
vector[50] x;
}
model {
x ~ cauchy(0, 1);
}
generated quantities {
real I = fabs(x[1]) < 1 ? 1 : 0;
}
Run the nominal model:
fit_nom <- stan(file = 'cauchy_nom.stan', seed = 7878, refresh = 0)
Warning: There were 1233 transitions after warmup that exceeded the maximum treedepth. Increase max_treedepth above 10. See
http://mc-stan.org/misc/warnings.html#maximum-treedepth-exceeded
Warning: Examine the pairs() plot to diagnose sampling problems
Exceeding the maximum treedepth and a low Bayesian fraction of missing information are diagnostics specific to dynamic HMC (Betancourt, 2017). We get warnings about a very large number of post-warmup transitions that exceeded the maximum treedepth, which is likely due to the very long tails of the Cauchy distribution. All chains also have a low estimated Bayesian fraction of missing information, indicating slow mixing.
mon <- monitor(fit_nom)
print(mon)
Inference for the input samples (4 chains: each with iter = 2000; warmup = 1000):
Q5 Q50 Q95 Mean SD Rhat Bulk_ESS Tail_ESS
x[1] -5.74 -0.01 6.31 2.50 36.38 1.03 1181 393
x[2] -5.83 -0.01 6.07 0.65 16.22 1.01 2645 502
x[3] -5.23 0.01 5.73 0.58 17.72 1.01 2683 823
x[4] -6.25 -0.02 6.90 0.16 11.49 1.01 3627 644
x[5] -9.66 -0.05 5.11 -1.41 11.27 1.01 629 156
x[6] -5.26 0.00 5.41 0.20 5.32 1.01 3060 883
x[7] -6.35 0.06 10.77 4.14 32.56 1.02 607 184
x[8] -6.45 -0.01 5.37 -0.22 7.66 1.00 2658 886
x[9] -6.53 0.00 6.30 0.15 7.38 1.01 3128 901
x[10] -6.12 -0.01 5.92 0.06 5.74 1.01 2421 642
x[11] -6.73 0.00 6.15 0.03 9.68 1.01 2079 600
x[12] -5.68 -0.03 4.88 0.19 17.53 1.00 2633 774
x[13] -4.53 -0.06 4.27 0.09 6.28 1.00 3148 811
x[14] -4.88 0.00 5.03 -0.02 5.93 1.00 1461 492
x[15] -14.49 -0.01 11.51 -1.41 22.74 1.03 486 160
x[16] -7.03 0.01 6.96 0.16 15.91 1.01 2329 463
x[17] -6.59 0.01 7.69 0.96 30.18 1.01 2292 446
x[18] -4.51 0.05 6.92 1.10 11.89 1.01 2640 447
x[19] -7.66 -0.05 6.08 -3.14 28.14 1.01 1147 298
x[20] -5.78 0.03 11.33 5.78 36.99 1.03 363 80
x[21] -4.89 0.02 5.38 0.11 5.45 1.01 3276 824
x[22] -5.59 0.04 5.37 0.49 15.38 1.01 2121 522
x[23] -14.45 0.01 7.00 -3.10 32.02 1.01 391 89
x[24] -5.93 -0.04 5.47 -1.01 16.61 1.02 1434 284
x[25] -7.07 -0.02 5.94 -1.76 21.26 1.01 1544 324
x[26] -8.96 -0.06 5.85 -1.97 17.36 1.01 1778 452
x[27] -8.81 -0.01 8.34 0.15 18.08 1.00 1816 352
x[28] -5.26 0.02 5.93 0.11 9.51 1.02 3776 675
x[29] -5.88 0.00 5.90 -0.04 18.47 1.01 3642 846
x[30] -5.57 -0.02 5.14 -0.18 6.22 1.00 4363 643
x[31] -7.36 0.01 7.25 0.07 8.71 1.00 3384 896
x[32] -8.85 -0.04 6.50 -10.22 81.18 1.01 561 141
x[33] -4.79 0.03 4.96 -0.35 8.61 1.01 2626 735
x[34] -5.91 -0.03 5.37 -0.11 5.96 1.01 2408 634
x[35] -6.10 0.02 6.79 -0.33 14.65 1.01 2654 630
x[36] -4.86 0.10 15.60 19.30 108.42 1.04 155 34
x[37] -8.93 0.00 7.86 -0.09 13.30 1.01 1155 427
x[38] -5.54 -0.02 4.83 -0.33 5.95 1.00 3353 576
x[39] -5.89 -0.03 5.84 0.11 7.29 1.02 4493 724
x[40] -8.71 0.01 6.71 -1.98 19.81 1.00 877 148
x[41] -6.88 -0.03 7.55 -0.27 11.28 1.00 1726 512
x[42] -6.15 0.01 6.48 0.09 15.51 1.01 1519 448
x[43] -6.38 0.00 7.39 0.11 12.10 1.01 2746 483
x[44] -7.84 0.00 7.99 0.41 11.50 1.01 2999 659
x[45] -4.77 -0.02 4.66 0.04 6.73 1.01 2904 1014
x[46] -4.84 0.00 5.99 1.16 22.07 1.00 955 338
x[47] -8.51 0.03 24.26 0.80 37.14 1.07 231 56
x[48] -6.73 0.00 5.33 -0.72 10.35 1.01 1907 469
x[49] -6.27 0.04 6.92 0.73 12.44 1.00 1490 390
x[50] -5.21 0.00 4.65 -0.22 7.08 1.00 3109 680
I 0.00 0.50 1.00 0.50 0.50 1.00 390 4000
lp__ -92.71 -69.21 -49.03 -69.54 13.35 1.05 117 323
For each parameter, Bulk_ESS and Tail_ESS are crude measures of
effective sample size for bulk and tail quantities respectively (good values is
ESS > 400), and Rhat is the potential scale reduction factor on rank normalized
split chains (at convergence, Rhat = 1).
which_min_ess <- which.min(mon[1:50, 'Tail_ESS'])
Several Rhat > 1.01 and some ESS < 400 also indicate that the results should not be trusted. Appendix C has more results with longer chains as well.
We can further analyze potential problems using local efficiency and rank plots. We specifically investigate x[36], which has the smallest tail-ESS of 34.
We examine the sampling efficiency in different parts of the posterior by computing the efficiency of small interval probability estimates (see Section Efficiency estimate for small interval probability estimates). Each interval contains \(1/k\) of the draws (e.g., \(5\%\) each, if \(k=20\)). The small interval efficiency measures the mixing of an indicator function that flags whether the values are inside or outside the specific small interval. As detailed above, this gives us a local efficiency measure which does not depend on the shape of the distribution.
plot_local_ess(fit = fit_nom, par = which_min_ess, nalpha = 20)
We see that the efficiency of our posterior draws is worryingly low in the tails (caused by slow mixing in the long tails of the Cauchy distribution). Orange ticks show iterations that exceeded the maximum treedepth.
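To make the indicator construction concrete, here is a minimal hand-rolled version of a single small-interval estimate. This is our own rough proxy for what plot_local_ess computes, and it assumes a recent rstan (>= 2.18) that exports ess_bulk().
# Indicator of falling into the lowest small interval (1/k of the
# draws) and its effective sample size (rough proxy, not the
# plot_local_ess implementation).
sims <- as.array(fit_nom)[, , paste0("x[", which_min_ess, "]")]
k <- 20
cuts <- quantile(sims, probs = c(0, 1 / k))
ind <- array(as.numeric(sims >= cuts[1] & sims <= cuts[2]), dim = dim(sims))
rstan::ess_bulk(ind)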
An alternative way to examine the efficiency in different parts of the posterior is to compute efficiency estimates for quantiles (see Section Efficiency for quantiles). Each interval has a specified proportion of draws, and the efficiency measures the mixing of an indicator function that flags when the values are smaller than or equal to the corresponding quantile.
plot_quantile_ess(fit = fit_nom, par = which_min_ess, nalpha = 40)
As above, we see that the efficiency of our posterior draws is worryingly low in the tails. Again, orange ticks show iterations that exceeded the maximum treedepth.
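A companion sketch for the quantile version (again a rough proxy of our own, not the plot_quantile_ess implementation, assuming rstan::ess_bulk is available): the indicator of lying at or below the 5% quantile, and its ESS.
# ESS of the indicator for the 5% quantile (rough proxy).
sims <- as.array(fit_nom)[, , paste0("x[", which_min_ess, "]")]
q05 <- quantile(sims, probs = 0.05)
rstan::ess_bulk(array(as.numeric(sims <= q05), dim = dim(sims)))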
We may also investigate how the estimated effective sample sizes change as we use more and more draws (Brooks and Gelman (1998) proposed a similar graph for \(\widehat{R}\)). If the effective sample size is highly unstable, does not increase proportionally with more draws, or even decreases, this indicates that simply running longer chains will likely not solve the convergence issues. In the plot below, we see how unstable both bulk-ESS and tail-ESS are for this example.
plot_change_ess(fit = fit_nom, par = which_min_ess)
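For comparison, here is a hand-rolled analogue of the same idea (our own sketch, assuming rstan::ess_tail is available): tail-ESS computed on growing prefixes of the chains.
# Tail-ESS as a function of the number of iterations used (sketch).
sims <- as.array(fit_nom)[, , paste0("x[", which_min_ess, "]")]
prefix <- seq(100, nrow(sims), by = 100)
ess <- sapply(prefix, function(n) rstan::ess_tail(sims[1:n, , drop = FALSE]))
plot(prefix, ess, type = "b",
     xlab = "iterations per chain", ylab = "tail-ESS")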
We can further analyze potential problems using rank plots in which we clearly see differences between chains.
samp <- as.array(fit_nom)
xmin <- paste0("x[", which_min_ess, "]")
mcmc_hist_r_scale(samp[, , xmin])
Next we examine an alternative parameterization that considers the Cauchy distribution as a scale mixture of Gaussian distributions. The model has two parameters and the Cauchy distributed \(x\)’s can be computed from those. In addition to improved sampling performance, the example illustrates that focusing on diagnostics matters.
writeLines(readLines("cauchy_alt_1.stan"))
parameters {
vector[50] x_a;
vector<lower=0>[50] x_b;
}
transformed parameters {
vector[50] x = x_a ./ sqrt(x_b);
}
model {
x_a ~ normal(0, 1);
x_b ~ gamma(0.5, 0.5);
}
generated quantities {
real I = fabs(x[1]) < 1 ? 1 : 0;
}
Run the alternative model:
fit_alt1 <- stan(file = 'cauchy_alt_1.stan', seed = 7878, refresh = 0)
There are no warnings, and the sampling is much faster.
mon <- monitor(fit_alt1)
print(mon)
Inference for the input samples (4 chains: each with iter = 2000; warmup = 1000):
Q5 Q50 Q95 Mean SD Rhat Bulk_ESS Tail_ESS
x_a[1] -1.65 -0.02 1.66 -0.01 0.99 1.00 4361 2961
x_a[2] -1.67 0.00 1.63 -0.01 1.01 1.00 4015 3167
x_a[3] -1.66 -0.03 1.62 -0.01 1.00 1.00 4190 3013
x_a[4] -1.63 -0.01 1.69 0.01 1.01 1.00 3626 2822
x_a[5] -1.64 -0.01 1.58 -0.01 0.98 1.00 3447 3029
x_a[6] -1.62 -0.01 1.56 -0.01 0.97 1.00 3975 2795
x_a[7] -1.61 0.00 1.65 0.01 1.01 1.00 3919 2271
x_a[8] -1.67 -0.03 1.60 -0.02 1.01 1.00 3736 3040
x_a[9] -1.63 -0.04 1.55 -0.03 0.99 1.00 3852 3075
x_a[10] -1.66 -0.03 1.74 0.00 1.02 1.00 3457 2810
x_a[11] -1.56 -0.02 1.55 -0.01 0.97 1.00 3532 2822
x_a[12] -1.64 0.00 1.69 0.01 1.00 1.00 3462 3036
x_a[13] -1.59 0.01 1.63 0.01 0.99 1.00 3479 2764
x_a[14] -1.72 -0.03 1.63 -0.03 1.01 1.00 3872 3179
x_a[15] -1.65 0.02 1.70 0.02 1.02 1.00 3843 2855
x_a[16] -1.67 -0.01 1.62 -0.01 1.01 1.00 3610 3047
x_a[17] -1.63 -0.01 1.66 -0.01 0.99 1.00 4363 3076
x_a[18] -1.65 -0.01 1.69 -0.01 1.01 1.00 4636 3088
x_a[19] -1.61 -0.02 1.59 -0.02 0.98 1.00 4055 3037
x_a[20] -1.64 -0.01 1.62 0.00 1.00 1.00 4340 2792
x_a[21] -1.67 0.04 1.68 0.02 1.03 1.00 3628 2719
x_a[22] -1.63 -0.01 1.64 -0.01 1.01 1.00 3976 2775
x_a[23] -1.62 0.03 1.70 0.02 1.01 1.00 4266 2682
x_a[24] -1.66 0.00 1.67 0.00 1.00 1.00 3710 2889
x_a[25] -1.64 0.02 1.66 0.00 1.00 1.00 4408 3062
x_a[26] -1.65 -0.02 1.61 0.00 1.00 1.00 3854 2674
x_a[27] -1.66 0.00 1.68 0.00 1.01 1.00 3331 2944
x_a[28] -1.69 0.02 1.68 0.02 1.01 1.00 4055 2689
x_a[29] -1.57 0.00 1.64 0.02 0.99 1.00 4097 3213
x_a[30] -1.61 0.00 1.64 0.01 0.99 1.00 3961 2900
x_a[31] -1.65 0.01 1.61 0.00 0.99 1.00 4097 3088
x_a[32] -1.59 -0.01 1.59 -0.01 0.97 1.00 4189 3171
x_a[33] -1.69 -0.01 1.77 0.00 1.05 1.00 3853 2594
x_a[34] -1.68 0.01 1.69 0.00 1.01 1.00 4012 2787
x_a[35] -1.61 0.03 1.67 0.03 1.00 1.00 3920 2829
x_a[36] -1.67 0.00 1.60 -0.01 0.99 1.00 4107 2797
x_a[37] -1.69 -0.03 1.59 -0.02 1.00 1.00 3956 2942
x_a[38] -1.67 -0.02 1.64 -0.02 1.02 1.00 4210 2919
x_a[39] -1.66 -0.02 1.65 -0.02 1.02 1.00 4482 2869
x_a[40] -1.63 0.00 1.62 0.00 1.00 1.00 4139 2848
x_a[41] -1.68 0.02 1.61 0.00 0.99 1.00 3963 2962
x_a[42] -1.64 0.03 1.64 0.02 1.00 1.00 4161 3077
x_a[43] -1.64 -0.03 1.63 -0.02 1.01 1.00 3808 2849
x_a[44] -1.66 -0.03 1.63 -0.02 1.00 1.00 3743 2865
x_a[45] -1.68 0.02 1.73 0.01 1.02 1.00 3783 2546
x_a[46] -1.56 0.04 1.66 0.05 0.97 1.00 4340 3277
x_a[47] -1.66 0.02 1.58 0.00 1.00 1.00 4283 2946
x_a[48] -1.61 0.00 1.66 0.00 0.98 1.00 4001 2779
x_a[49] -1.62 0.00 1.64 0.00 1.00 1.00 3906 3099
x_a[50] -1.62 -0.01 1.58 0.00 0.99 1.00 3794 2962
x_b[1] 0.00 0.44 4.00 1.00 1.44 1.00 2268 1264
x_b[2] 0.00 0.46 3.75 1.00 1.38 1.00 2444 1428
x_b[3] 0.00 0.47 3.89 1.03 1.46 1.00 3578 1950
x_b[4] 0.00 0.46 3.83 1.01 1.41 1.00 2693 1342
x_b[5] 0.00 0.45 3.95 1.03 1.46 1.00 3056 1731
x_b[6] 0.00 0.44 3.80 1.00 1.41 1.00 3264 1786
x_b[7] 0.01 0.43 3.62 0.95 1.32 1.00 2888 1907
x_b[8] 0.00 0.44 3.79 1.01 1.40 1.00 2876 1578
x_b[9] 0.00 0.50 3.67 1.00 1.36 1.00 2820 1535
x_b[10] 0.00 0.44 3.79 0.99 1.39 1.00 2534 1699
x_b[11] 0.01 0.49 3.90 1.03 1.44 1.00 3595 2032
x_b[12] 0.00 0.44 3.82 0.99 1.40 1.00 2405 1200
x_b[13] 0.00 0.46 3.97 1.03 1.45 1.00 2045 1073
x_b[14] 0.00 0.44 3.97 1.03 1.49 1.00 2829 1443
x_b[15] 0.01 0.45 3.77 1.01 1.41 1.00 2853 1447
x_b[16] 0.00 0.48 3.79 1.01 1.43 1.00 2661 1604
x_b[17] 0.01 0.46 3.93 1.02 1.42 1.00 2775 1477
x_b[18] 0.01 0.50 4.11 1.06 1.53 1.00 2689 1170
x_b[19] 0.00 0.45 3.92 1.00 1.41 1.00 2392 1450
x_b[20] 0.00 0.42 3.96 0.99 1.42 1.00 2296 1240
x_b[21] 0.01 0.49 3.94 1.04 1.43 1.00 3069 1848
x_b[22] 0.00 0.45 3.95 1.02 1.46 1.00 3012 1733
x_b[23] 0.00 0.46 3.95 1.00 1.42 1.00 1787 1093
x_b[24] 0.00 0.44 3.86 1.00 1.43 1.00 1903 1008
x_b[25] 0.00 0.45 3.66 0.98 1.39 1.00 2348 1094
x_b[26] 0.00 0.47 4.05 1.03 1.46 1.00 2421 1549
x_b[27] 0.00 0.45 3.90 1.01 1.41 1.00 2777 1470
x_b[28] 0.00 0.46 3.79 0.98 1.37 1.00 3353 1699
x_b[29] 0.01 0.46 3.87 1.01 1.43 1.00 3428 1997
x_b[30] 0.00 0.44 3.89 1.01 1.43 1.00 2833 1554
x_b[31] 0.00 0.49 3.84 1.01 1.41 1.00 3035 1633
x_b[32] 0.00 0.43 3.75 0.97 1.36 1.00 2276 1602
x_b[33] 0.00 0.45 3.79 1.00 1.41 1.00 3093 1888
x_b[34] 0.00 0.47 3.97 1.03 1.45 1.00 3309 1650
x_b[35] 0.00 0.48 3.84 1.02 1.42 1.00 2493 1588
x_b[36] 0.00 0.44 3.89 0.99 1.39 1.00 3108 1876
x_b[37] 0.00 0.46 3.70 0.98 1.35 1.00 2644 1322
x_b[38] 0.01 0.45 3.93 1.01 1.45 1.00 3155 1776
x_b[39] 0.00 0.45 3.75 0.99 1.42 1.00 2038 934
x_b[40] 0.00 0.42 3.76 0.94 1.31 1.00 2657 1403
x_b[41] 0.00 0.46 3.78 1.00 1.38 1.00 2648 1370
x_b[42] 0.00 0.45 3.89 1.00 1.43 1.00 2334 1365
x_b[43] 0.00 0.47 4.03 1.03 1.44 1.00 2967 1797
x_b[44] 0.00 0.43 3.69 0.97 1.37 1.00 2557 1591
x_b[45] 0.00 0.44 3.66 0.96 1.30 1.00 2731 1785
x_b[46] 0.00 0.46 3.74 1.01 1.40 1.00 2538 1183
x_b[47] 0.01 0.47 3.83 1.01 1.39 1.00 3948 2071
x_b[48] 0.01 0.48 3.89 1.02 1.39 1.00 3207 1917
x_b[49] 0.00 0.47 3.74 1.00 1.35 1.00 2550 1533
x_b[50] 0.00 0.46 4.01 1.00 1.48 1.00 2881 1395
x[1] -6.47 -0.02 6.52 0.01 34.88 1.01 3901 2122
x[2] -6.52 0.00 6.50 3.39 145.63 1.00 3767 1947
x[3] -6.31 -0.03 5.95 -0.08 16.17 1.00 3681 2514
x[4] -6.76 -0.01 5.86 -1.26 50.73 1.00 3244 2159
x[5] -6.67 -0.01 5.75 -0.31 30.29 1.00 3355 2430
x[6] -5.64 -0.01 6.48 -145.80 6461.97 1.00 3802 2536
x[7] -6.59 0.00 6.37 -0.58 22.65 1.00 3480 2508
x[8] -6.50 -0.04 6.29 -0.04 20.61 1.00 3474 2374
x[9] -5.73 -0.04 5.96 -0.92 55.64 1.00 3493 2262
x[10] -6.37 -0.04 6.43 -0.20 25.22 1.00 2871 2286
x[11] -5.61 -0.02 5.55 0.12 12.36 1.00 3423 2587
x[12] -7.12 0.00 6.19 0.41 91.83 1.00 3340 2329
x[13] -6.44 0.02 6.22 -5.65 201.97 1.00 3332 2164
x[14] -6.72 -0.04 6.56 -2.94 134.97 1.00 3693 2257
x[15] -5.68 0.02 6.06 -0.86 30.39 1.00 3511 1969
x[16] -6.99 -0.01 7.24 -0.40 24.59 1.00 3614 2406
x[17] -5.78 -0.01 5.39 -0.06 25.14 1.00 3880 2520
x[18] -5.45 -0.01 5.92 -0.55 259.42 1.00 4303 2254
x[19] -7.16 -0.02 5.84 9.85 549.14 1.00 3505 1931
x[20] -7.39 -0.01 6.68 -3.09 145.63 1.00 3411 1916
x[21] -5.93 0.05 5.85 0.23 45.62 1.00 3419 2179
x[22] -6.96 -0.02 6.23 -0.20 27.06 1.00 3554 2149
x[23] -5.99 0.05 6.54 2.51 114.59 1.00 3456 2098
x[24] -6.75 0.00 6.99 -0.67 111.43 1.00 3541 1922
x[25] -6.33 0.02 6.40 -2.71 100.25 1.00 4036 2356
x[26] -5.57 -0.02 6.32 0.20 14.47 1.00 3335 2319
x[27] -6.40 0.00 6.17 -0.78 52.60 1.00 3377 2400
x[28] -5.70 0.02 6.49 3.10 105.25 1.00 3446 2121
x[29] -5.40 0.00 5.67 -0.12 18.13 1.00 3918 2430
x[30] -5.83 -0.01 6.45 1.00 51.19 1.00 3642 2138
x[31] -5.85 0.02 5.66 0.86 26.31 1.00 3595 2271
x[32] -6.49 -0.01 6.18 0.11 50.69 1.00 4227 2731
x[33] -7.08 -0.01 6.34 -0.41 23.46 1.00 3807 2376
x[34] -6.59 0.01 6.05 0.95 48.19 1.00 4084 2358
x[35] -5.44 0.03 6.13 0.32 48.43 1.00 3756 2247
x[36] -6.15 0.00 6.11 -0.06 28.42 1.00 3600 2386
x[37] -6.08 -0.03 5.34 0.70 59.91 1.00 3623 2005
x[38] -5.74 -0.02 5.77 0.20 13.16 1.00 3820 2536
x[39] -6.73 -0.03 5.84 2.77 285.45 1.00 4121 1944
x[40] -6.39 0.01 6.52 -2.75 102.74 1.00 3612 1823
x[41] -6.01 0.02 5.92 -0.42 35.69 1.00 3492 2222
x[42] -7.39 0.05 6.86 0.49 22.47 1.00 3558 1949
x[43] -5.98 -0.03 6.69 14.36 626.20 1.00 3823 2516
x[44] -7.04 -0.04 6.17 1.55 106.26 1.00 3310 2239
x[45] -5.75 0.02 6.23 -0.43 32.28 1.00 3752 2437
x[46] -5.59 0.06 6.33 -0.32 76.00 1.00 3898 1976
x[47] -5.49 0.03 5.43 -0.02 12.08 1.00 3893 2659
x[48] -5.96 0.00 4.85 -0.01 21.02 1.00 3674 2274
x[49] -6.55 0.00 5.25 -1.24 129.15 1.00 3576 2243
x[50] -6.74 -0.01 6.90 -1.06 147.11 1.00 3437 2486
I 0.00 0.00 1.00 0.50 0.50 1.00 2648 4000
lp__ -95.19 -80.99 -68.66 -81.34 8.08 1.00 1310 1928
For each parameter, Bulk_ESS and Tail_ESS are crude measures of
effective sample size for bulk and tail quantities respectively (good values is
ESS > 400), and Rhat is the potential scale reduction factor on rank normalized
split chains (at convergence, Rhat = 1).
which_min_ess <- which.min(mon[101:150, 'Tail_ESS'])
All Rhat < 1.01 and ESS > 400 indicate that sampling worked much better with the alternative parameterization. Appendix C has more results using other alternative parameterizations. The x_a and x_b used to form the Cauchy distributed x have stable quantile, mean, and sd values. As x is Cauchy distributed, it has stable quantiles but wildly varying mean and sd estimates, as the true values are not finite.
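A quick base-R check of this point (our own sketch): across independent replications of Cauchy draws, medians are stable while means wander.
# Medians are stable across replications, means are not (sketch).
set.seed(1)
reps <- replicate(5, rcauchy(4000))
rbind(median = apply(reps, 2, median), mean = colMeans(reps))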
We can further analyze potential problems using local efficiency estimates and rank plots. We take a detailed look at x[40], which has the smallest tail-ESS of 1823.
We examine the sampling efficiency in different parts of the posterior by computing the efficiency estimates for small interval probability estimates.
plot_local_ess(fit = fit_alt1, par = which_min_ess + 100, nalpha = 20)
The efficiency estimate is good in all parts of the posterior. Further, we examine the sampling efficiency of different quantile estimates.
plot_quantile_ess(fit = fit_alt1, par = which_min_ess + 100, nalpha = 40)
Rank plots also look rather similar across chains.
samp <- as.array(fit_alt1)
xmin <- paste0("x[", which_min_ess, "]")
mcmc_hist_r_scale(samp[, , xmin])
In summary, the alternative parameterization produces results that look much better than for the nominal parameterization. There are still some differences in the tails, which we also identified via the tail-ESS.
Half-Cauchy priors are common and, for example, in Stan are usually set using the nominal parameterization. However, when the constraint <lower=0> is used, Stan automatically samples in the unconstrained log(x) space, which substantially changes the geometry.
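To see why the geometry changes, compare draws of a half-Cauchy variable on the constrained and unconstrained scales (a base-R sketch of our own, not part of the original analysis):
# Half-Cauchy draws have a very long right tail, while their logs are
# short-tailed and much closer to Gaussian (sketch).
set.seed(1)
x <- abs(rcauchy(1e4))
par(mfrow = c(1, 2))
hist(x[x < 50], breaks = 50, main = "x")
hist(log(x), breaks = 50, main = "log(x)")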
writeLines(readLines("half_cauchy_nom.stan"))
parameters {
vector<lower=0>[50] x;
}
model {
x ~ cauchy(0, 1);
}
generated quantities {
real I = fabs(x[1]) < 1 ? 1 : 0;
}
Run the half-Cauchy with nominal parameterization (and positive constraint):
fit_half_nom <- stan(file = 'half_cauchy_nom.stan', seed = 7878, refresh = 0)
There are no warnings, and the sampling is much faster than for the Cauchy nominal model.
mon <- monitor(fit_half_nom)
print(mon)
Inference for the input samples (4 chains: each with iter = 2000; warmup = 1000):
Q5 Q50 Q95 Mean SD Rhat Bulk_ESS Tail_ESS
x[1] 0.08 1.03 13.56 11.12 388.18 1 8077 2223
x[2] 0.10 1.02 10.79 11.64 482.36 1 9868 2612
x[3] 0.06 1.00 12.86 5.40 71.62 1 7895 2097
x[4] 0.09 0.98 11.50 4.41 29.48 1 7596 2347
x[5] 0.08 1.03 14.01 4.52 19.85 1 7495 2230
x[6] 0.08 0.99 11.72 12.52 379.98 1 7948 2145
x[7] 0.07 0.98 12.02 9.23 280.92 1 8336 2117
x[8] 0.09 0.99 10.82 4.61 40.83 1 8194 2165
x[9] 0.09 1.02 11.56 11.67 483.09 1 8464 2127
x[10] 0.07 1.03 14.90 6.01 50.98 1 7963 2399
x[11] 0.08 0.97 12.54 15.55 532.29 1 6980 1788
x[12] 0.07 1.03 13.33 4.10 17.47 1 8226 2029
x[13] 0.08 1.01 14.11 8.88 120.14 1 8077 2032
x[14] 0.07 0.96 13.48 6.91 71.52 1 7455 2261
x[15] 0.07 1.00 14.43 8.66 95.80 1 6766 2246
x[16] 0.08 0.99 14.06 5.55 38.45 1 7396 2174
x[17] 0.08 0.99 11.70 8.70 299.76 1 7144 2260
x[18] 0.09 0.98 11.56 14.42 397.59 1 7614 2035
x[19] 0.07 0.99 14.31 6.33 69.06 1 7823 1825
x[20] 0.09 1.00 11.21 6.40 160.21 1 7905 2355
x[21] 0.07 1.00 12.01 7.03 144.13 1 7843 2078
x[22] 0.09 1.02 12.69 32.18 1733.46 1 7735 2043
x[23] 0.07 0.98 13.46 6.33 67.34 1 7119 2142
x[24] 0.07 1.00 12.31 4.99 41.55 1 6893 1982
x[25] 0.09 1.00 11.53 8.60 270.33 1 7757 2351
x[26] 0.08 0.98 10.68 6.97 119.11 1 6433 2230
x[27] 0.08 1.01 13.37 4.88 34.30 1 7005 2119
x[28] 0.08 0.97 11.41 5.76 66.70 1 9631 2228
x[29] 0.07 1.00 13.68 10.37 239.91 1 6109 2312
x[30] 0.09 1.02 10.99 7.98 206.41 1 7958 2368
x[31] 0.08 0.96 12.42 4.39 32.13 1 6493 2102
x[32] 0.10 1.01 11.46 4.99 55.38 1 7043 1742
x[33] 0.08 0.99 13.76 5.45 36.80 1 6913 2455
x[34] 0.08 0.98 12.38 5.92 78.21 1 8610 2514
x[35] 0.07 0.96 13.57 5.70 57.45 1 6406 2160
x[36] 0.06 1.00 13.54 5.36 39.60 1 7694 2031
x[37] 0.07 1.00 13.87 8.29 195.61 1 7276 2491
x[38] 0.08 1.02 12.23 6.40 119.06 1 6790 2369
x[39] 0.10 1.01 11.69 6.67 88.51 1 7739 2518
x[40] 0.09 0.96 12.06 5.16 44.41 1 7087 2349
x[41] 0.07 0.96 12.87 4.97 42.30 1 8650 2333
x[42] 0.07 1.02 13.33 7.77 132.34 1 8703 2410
x[43] 0.08 0.97 10.25 26.62 1404.42 1 8747 2194
x[44] 0.08 1.03 12.29 5.96 59.29 1 6378 2257
x[45] 0.08 0.98 12.22 7.70 123.02 1 8430 2314
x[46] 0.08 0.99 12.06 5.36 72.19 1 8185 2237
x[47] 0.08 1.01 14.50 7.00 76.72 1 9562 2265
x[48] 0.08 1.01 13.05 5.06 30.44 1 8402 2690
x[49] 0.08 1.00 12.93 7.48 99.98 1 7993 1804
x[50] 0.08 1.00 13.22 8.86 178.68 1 7523 2243
I 0.00 0.00 1.00 0.49 0.50 1 7357 4000
lp__ -80.63 -69.06 -59.31 -69.32 6.42 1 1218 2001
For each parameter, Bulk_ESS and Tail_ESS are crude measures of
effective sample size for bulk and tail quantities respectively (good values is
ESS > 400), and Rhat is the potential scale reduction factor on rank normalized
split chains (at convergence, Rhat = 1).
All Rhat < 1.01 and ESS > 400 indicate good performance of the sampler. We see that Stan’s automatic (and implicit) transformation of constrained parameters can have a big effect on the sampling performance. More experiments with different parameterizations of the half-Cauchy distribution can be found in Appendix C.
This section relates to the examples presented in Section 5.2 of the paper.
The Eight Schools data is a classic example for hierarchical models (see Section 5.5 in Gelman et al., 2013), which despite the apparent simplicity nicely illustrates the typical problems in inference for hierarchical models. The Stan models below are from Michael Betancourt’s case study on Diagnosing Biased Inference with Divergences. Appendix D contains more detailed analysis of different algorithm variants.
writeLines(readLines("eight_schools_cp.stan"))
data {
int<lower=0> J;
real y[J];
real<lower=0> sigma[J];
}
parameters {
real mu;
real<lower=0> tau;
real theta[J];
}
model {
mu ~ normal(0, 5);
tau ~ cauchy(0, 5);
theta ~ normal(mu, tau);
y ~ normal(theta, sigma);
}
We directly run the centered parameterization model with an increased adapt_delta value to reduce the probability of getting divergent transitions.
eight_schools <- read_rdump("eight_schools.data.R")
fit_cp <- stan(
file = 'eight_schools_cp.stan', data = eight_schools,
iter = 2000, chains = 4, seed = 483892929, refresh = 0,
control = list(adapt_delta = 0.95)
)
Warning: There were 113 divergent transitions after warmup. Increasing adapt_delta above 0.95 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
Warning: There were 2 chains where the estimated Bayesian Fraction of Missing Information was low. See
http://mc-stan.org/misc/warnings.html#bfmi-low
Warning: Examine the pairs() plot to diagnose sampling problems
Despite an increased adapt_delta, we still observe many divergent transitions, which by itself is already a sufficient indicator not to trust the results. We can use Rhat and ESS diagnostics to recognize problematic parts of the posterior, and they can also be used when MCMC algorithms other than HMC are employed.
mon <- monitor(fit_cp)
print(mon)
Inference for the input samples (4 chains: each with iter = 2000; warmup = 1000):
Q5 Q50 Q95 Mean SD Rhat Bulk_ESS Tail_ESS
mu -1.11 4.53 9.90 4.44 3.39 1.02 548 754
tau 0.39 2.85 9.61 3.62 3.10 1.07 67 82
theta[1] -2.24 5.81 16.29 6.23 5.74 1.02 747 1294
theta[2] -2.60 5.07 13.43 5.08 4.86 1.01 970 1240
theta[3] -5.01 4.35 12.11 3.94 5.33 1.01 899 1147
theta[4] -2.86 5.00 12.82 4.89 4.82 1.01 986 1059
theta[5] -4.74 4.03 10.83 3.66 4.81 1.01 715 988
theta[6] -4.15 4.28 11.60 4.08 4.84 1.01 833 976
theta[7] -1.30 5.97 15.59 6.31 5.18 1.02 612 1182
theta[8] -3.37 5.12 13.84 4.98 5.34 1.01 901 1477
lp__ -24.68 -15.04 0.37 -14.00 7.44 1.07 69 89
For each parameter, Bulk_ESS and Tail_ESS are crude measures of
effective sample size for bulk and tail quantities respectively (good values is
ESS > 400), and Rhat is the potential scale reduction factor on rank normalized
split chains (at convergence, Rhat = 1).
See Appendix D for results of longer chains.
Bulk-ESS and Tail-ESS for the between-school standard deviation tau are 67 and 82, respectively. Both are less than 400, indicating we should investigate that parameter more carefully. We thus examine the sampling efficiency in different parts of the posterior by computing the efficiency of small interval probability estimates for tau. These plots may show either quantiles or parameter values on the vertical axis. Red ticks show divergent transitions.
plot_local_ess(fit = fit_cp, par = "tau", nalpha = 20)
plot_local_ess(fit = fit_cp, par = "tau", nalpha = 20, rank = FALSE)
We see that the sampler has difficulties exploring small tau values. As the sampling efficiency for estimating small tau values is practically zero, we may be missing a substantial amount of posterior mass and obtaining biased estimates. The red ticks, which show iterations with divergences, are concentrated at small tau values, also indicating problems in exploring small values, which is likely to cause bias.
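We can corroborate the visual impression numerically. The following is our own small sketch using rstan::get_sampler_params; the quartile split is an arbitrary choice.
# Divergence rate by tau quartile (sketch): divergences concentrate
# in the lowest quartile of tau.
sp <- get_sampler_params(fit_cp, inc_warmup = FALSE)
div <- unlist(lapply(sp, function(x) x[, "divergent__"]))
tau <- as.vector(as.array(fit_cp, pars = "tau"))
tapply(div, cut(tau, quantile(tau, 0:4 / 4), include.lowest = TRUE), mean)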
We also examine the sampling efficiency of different quantile estimates. Again, these plots may show either quantiles or parameter values on the vertical axis.
plot_quantile_ess(fit = fit_cp, par = "tau", nalpha = 40)
plot_quantile_ess(fit = fit_cp, par = "tau", nalpha = 40, rank = FALSE)
Most of the quantile estimates have worryingly low effective sample size.
Let’s see how the estimated effective sample size changes when we use more and more draws. Here we don’t see sudden changes, but both bulk-ESS and tail-ESS are too low. See Appendix D for results of longer chains.
plot_change_ess(fit = fit_cp, par = "tau")
In line with these findings, the rank plots of tau clearly show problems in the mixing of the chains.
samp_cp <- as.array(fit_cp)
mcmc_hist_r_scale(samp_cp[, , "tau"])
For hierarchical models, the non-centered parameterization often works better than the centered one:
writeLines(readLines("eight_schools_ncp.stan"))
data {
int<lower=0> J;
real y[J];
real<lower=0> sigma[J];
}
parameters {
real mu;
real<lower=0> tau;
real theta_tilde[J];
}
transformed parameters {
real theta[J];
for (j in 1:J)
theta[j] = mu + tau * theta_tilde[j];
}
model {
mu ~ normal(0, 5);
tau ~ cauchy(0, 5);
theta_tilde ~ normal(0, 1);
y ~ normal(theta, sigma);
}
For reasons of comparability, we also run the non-centered parameterization model with an increased adapt_delta value:
fit_ncp2 <- stan(
file = 'eight_schools_ncp.stan', data = eight_schools,
iter = 2000, chains = 4, seed = 483892929, refresh = 0,
control = list(adapt_delta = 0.95)
)
We get zero divergences and no other warnings, which is a good first sign.
mon <- monitor(fit_ncp2)
print(mon)
Inference for the input samples (4 chains: each with iter = 2000; warmup = 1000):
Q5 Q50 Q95 Mean SD Rhat Bulk_ESS Tail_ESS
mu -1.14 4.42 9.96 4.47 3.37 1 5531 3004
tau 0.30 2.81 9.50 3.59 3.17 1 2872 1908
theta_tilde[1] -1.29 0.31 1.88 0.31 0.98 1 5046 2874
theta_tilde[2] -1.44 0.11 1.62 0.10 0.94 1 4177 2735
theta_tilde[3] -1.64 -0.08 1.48 -0.08 0.97 1 6485 2994
theta_tilde[4] -1.51 0.08 1.62 0.06 0.95 1 6076 2514
theta_tilde[5] -1.71 -0.17 1.39 -0.16 0.94 1 5608 3177
theta_tilde[6] -1.64 -0.06 1.53 -0.06 0.97 1 4855 2773
theta_tilde[7] -1.25 0.40 1.87 0.37 0.96 1 4796 2849
theta_tilde[8] -1.54 0.07 1.66 0.06 0.96 1 6142 2972
theta[1] -1.62 5.64 16.31 6.25 5.58 1 4907 3015
theta[2] -2.42 4.82 12.96 5.01 4.68 1 5122 3242
theta[3] -4.63 4.26 12.10 4.09 5.27 1 5457 3407
theta[4] -2.73 4.75 12.48 4.82 4.78 1 4695 3130
theta[5] -3.96 3.82 11.02 3.72 4.62 1 5346 3398
theta[6] -3.88 4.27 11.44 4.13 4.95 1 5670 3393
theta[7] -0.98 5.88 15.43 6.38 5.10 1 4708 3242
theta[8] -3.23 4.78 13.13 4.87 5.17 1 4924 3037
lp__ -11.19 -6.60 -3.78 -6.92 2.31 1 1641 2344
For each parameter, Bulk_ESS and Tail_ESS are crude measures of
effective sample size for bulk and tail quantities respectively (good values is
ESS > 400), and Rhat is the potential scale reduction factor on rank normalized
split chains (at convergence, Rhat = 1).
All Rhat < 1.01 and ESS > 400 indicate a much better performance of the non-centered parameterization.
We examine the sampling efficiency in different parts of the posterior by computing the effective sample size for small interval probability estimates for tau.
plot_local_ess(fit = fit_ncp2, par = 2, nalpha = 20)
Small tau values are still more difficult to explore, but the relative efficiency is in a good range. We may also check this with a finer resolution:
plot_local_ess(fit = fit_ncp2, par = 2, nalpha = 50)
The sampling efficiency for different quantile estimates looks good as well.
plot_quantile_ess(fit = fit_ncp2, par = 2, nalpha = 40)
In line with these findings, the rank plots of tau show no substantial differences between chains.
samp_ncp2 <- as.array(fit_ncp2)
mcmc_hist_r_scale(samp_ncp2[, , 2])
Betancourt, M. (2017) ‘A conceptual introduction to Hamiltonian Monte Carlo’, arXiv preprint arXiv:1701.02434.
Brooks, S. P. and Gelman, A. (1998) ‘General methods for monitoring convergence of iterative simulations’, Journal of Computational and Graphical Statistics, 7(4), pp. 434–455.
Gelman, A. et al. (2013) Bayesian Data Analysis, third edition. CRC Press.
The following abbreviations are used throughout the appendices:
There are no examples or numerical experiments related to Appendix A.
This part focuses on diagnostics for detecting trends within chains as well as location and scale differences between chains.
To simplify, in this part, independent draws are used as a proxy for very efficient MCMC sampling. First, we sample draws from a standard-normal distribution. We will discuss the behavior for non-normal distributions later. See Appendix A for the abbreviations used.
All chains are drawn from the same Normal(0, 1) distribution, with the same linear trend added to each chain:
conds <- expand.grid(
iters = c(250, 1000, 4000),
trend = c(0, 0.25, 0.5, 0.75, 1),
rep = 1:10
)
res <- vector("list", nrow(conds))
chains = 4
for (i in 1:nrow(conds)) {
iters <- conds[i, "iters"]
trend <- conds[i, "trend"]
rep <- conds[i, "rep"]
r <- array(rnorm(iters * chains), c(iters, chains))
r <- r + seq(-trend, trend, length.out = iters)
rs <- as.data.frame(monitor_extra(r))
res[[i]] <- cbind(iters, trend, rep, rs)
}
res <- bind_rows(res)
If we don’t split chains, Rhat misses the trends if all chains still have a similar marginal distribution.
ggplot(data = res, aes(y = Rhat, x = trend)) +
geom_point() +
geom_jitter() +
facet_grid(. ~ iters) +
geom_hline(yintercept = 1.005, linetype = 'dashed') +
geom_hline(yintercept = 1) +
ggtitle('Rhat without splitting chains')
Split-Rhat can detect trends, even if the marginals of the chains are similar.
ggplot(data = res, aes(y = zsRhat, x = trend)) +
geom_point() + geom_jitter() +
facet_grid(. ~ iters) +
geom_hline(yintercept = 1.005, linetype = 'dashed') +
geom_hline(yintercept = 1) +
ggtitle('Split-Rhat')
Result: Split-Rhat is useful for detecting non-stationarity (i.e., trends) in the chains. If we use a threshold of \(1.01\), we can detect trends which account for 2% or more of the total marginal variance. If we use a threshold of \(1.1\), we detect trends which account for 30% or more of the total marginal variance.
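To make concrete why splitting matters, here is a toy computation of the classic between/within formula (our own sketch, not the monitor_extra code): with a trend shared by all chains, the unsplit Rhat stays near 1 while the split version flags the problem.
# Classic Rhat from between- and within-chain variances (sketch).
classic_rhat <- function(r) {
  n <- nrow(r)
  b <- n * var(colMeans(r))       # between-chain variance
  w <- mean(apply(r, 2, var))     # within-chain variance
  sqrt(((n - 1) / n * w + b / n) / w)
}
set.seed(1)
r <- array(rnorm(1000 * 4), c(1000, 4)) + seq(-1, 1, length.out = 1000)
r_split <- cbind(r[1:500, ], r[501:1000, ])  # split each chain in half
c(unsplit = classic_rhat(r), split = classic_rhat(r_split))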
The effective sample size is based on split Rhat and within-chain autocorrelation. We plot the relative efficiency \(R_{\rm eff}=S_{\rm eff}/S\) for easier comparison between different values of \(S\). In the plot below, dashed lines indicate the threshold at which we would consider the effective sample size to be sufficient (i.e., \(S_{\rm eff} > 400\)). Since we plot the relative efficiency instead of the effective sample size itself, this threshold is divided by \(S\), which we compute here as the number of iterations per chain (variable iters) times the number of chains (\(4\)).
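For reference, the dashed thresholds can be tabulated directly (a small sketch of our own; 4 chains assumed, as in the experiments):
# Relative efficiency needed for S_eff > 400 at each chain length.
its <- c(250, 1000, 4000)
data.frame(iters = its, S = 4 * its, threshold = 400 / (4 * its))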
ggplot(data = res, aes(y = zsreff, x = trend)) +
geom_point() +
geom_jitter() +
facet_grid(. ~ iters) +
geom_hline(yintercept = c(0,1)) +
geom_hline(aes(yintercept = 400 / (4 * iters)), linetype = 'dashed') +
ggtitle('Relative Bulk-ESS (zsreff)') +
scale_y_continuous(breaks = seq(0, 1.5, by = 0.25))
Result: Split-Rhat is more sensitive to trends for small sample sizes, but effective sample size becomes more sensitive for larger samples sizes (as autocorrelations can be estimated more accurately).
Advice: If in doubt, run longer chains for more accurate convergence diagnostics.
Next we investigate the sensitivity to detect if one of the chains has not converged to the same distribution as the others, but has a different mean.
conds <- expand.grid(
iters = c(250, 1000, 4000),
shift = c(0, 0.25, 0.5, 0.75, 1),
rep = 1:10
)
res <- vector("list", nrow(conds))
chains = 4
for (i in 1:nrow(conds)) {
iters <- conds[i, "iters"]
shift <- conds[i, "shift"]
rep <- conds[i, "rep"]
r <- array(rnorm(iters * chains), c(iters, chains))
r[, 1] <- r[, 1] + shift
rs <- as.data.frame(monitor_extra(r))
res[[i]] <- cbind(iters, shift, rep, rs)
}
res <- bind_rows(res)
ggplot(data = res, aes(y = zsRhat, x = shift)) +
geom_point() +
geom_jitter() +
facet_grid(. ~ iters) +
geom_hline(yintercept = 1.005, linetype = 'dashed') +
geom_hline(yintercept = 1) +
ggtitle('Split-Rhat')
Result: If we use a threshold of \(1.01\), we can detect shifts with a magnitude of one third or more of the marginal standard deviation. If we use a threshold of \(1.1\), we detect shifts with a magnitude equal to or larger than the marginal standard deviation.
ggplot(data = res, aes(y = zsreff, x = shift)) +
geom_point() +
geom_jitter() +
facet_grid(. ~ iters) +
geom_hline(yintercept = c(0,1)) +
geom_hline(aes(yintercept = 400 / (4 * iters)), linetype = 'dashed') +
ggtitle('Relative Bulk-ESS (zsreff)') +
scale_y_continuous(breaks = seq(0, 1.5, by = 0.25))
Result: The effective sample size is not as sensitive, but a shift with a magnitude of half the marginal standard deviation or more will lead to very low relative efficiency when the total number of draws increases.
Rank plots can be used to visualize differences between chains. Here, we show rank plots for the case of 4 chains, 250 draws per chain, and a shift of 0.5.
iters = 250
chains = 4
shift = 0.5
r <- array(rnorm(iters * chains), c(iters, chains))
r[, 1] <- r[, 1] + shift
colnames(r) <- 1:4
mcmc_hist_r_scale(r)
Although Rhat was less than \(1.05\) in this situation, the rank plots clearly show that the first chain behaves differently.
Next, we investigate the sensitivity to detect if one of the chains has not converged to the same distribution as the others, but has lower marginal variance.
conds <- expand.grid(
iters = c(250, 1000, 4000),
scale = c(0, 0.25, 0.5, 0.75, 1),
rep = 1:10
)
res <- vector("list", nrow(conds))
chains = 4
for (i in 1:nrow(conds)) {
iters <- conds[i, "iters"]
scale <- conds[i, "scale"]
rep <- conds[i, "rep"]
r <- array(rnorm(iters * chains), c(iters, chains))
r[, 1] <- r[, 1] * scale
rs <- as.data.frame(monitor_extra(r))
res[[i]] <- cbind(iters, scale, rep, rs)
}
res <- bind_rows(res)
We first look at the Rhat estimates:
ggplot(data = res, aes(y = zsRhat, x = scale)) +
geom_point() +
geom_jitter() +
facet_grid(. ~ iters) +
geom_hline(yintercept = 1.005, linetype = 'dashed') +
geom_hline(yintercept = 1) +
ggtitle('Split-Rhat')
Result: Split-Rhat is not able to detect scale differences between chains.
ggplot(data = res, aes(y = zfsRhat, x = scale)) +
geom_point() +
geom_jitter() +
facet_grid(. ~ iters) +
geom_hline(yintercept = 1.005, linetype = 'dashed') +
geom_hline(yintercept = 1) +
ggtitle('Folded-split-Rhat')
Result: Folded-Split-Rhat focuses on scales and detects scale differences.
Result: If we use a threshold of \(1.01\), we can detect a chain with scale less than \(3/4\) of the standard deviation of the others. If we use threshold of \(1.1\), we detect a chain with standard deviation less than \(1/4\) of the standard deviation of the others.
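Folding in a nutshell, as a toy computation of our own (reusing the classic between/within formula rather than the monitor_extra code): fold the draws around the median, then compute split-Rhat on the folded values.
# Split-Rhat before and after folding, with one low-scale chain.
split_rhat <- function(r) {
  n2 <- floor(nrow(r) / 2)
  r <- cbind(r[1:n2, ], r[(n2 + 1):(2 * n2), ])  # split chains in half
  b <- n2 * var(colMeans(r))
  w <- mean(apply(r, 2, var))
  sqrt(((n2 - 1) / n2 * w + b / n2) / w)
}
set.seed(1)
r <- array(rnorm(250 * 4), c(250, 4))
r[, 1] <- r[, 1] * 0.25                 # one chain with a smaller scale
folded <- abs(r - median(r))            # fold around the overall median
c(split = split_rhat(r), folded_split = split_rhat(folded))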
Next, we look at the effective sample size estimates:
ggplot(data = res, aes(y = zsreff, x = scale)) +
geom_point() +
geom_jitter() +
facet_grid(. ~ iters) +
geom_hline(yintercept = c(0,1)) +
geom_hline(aes(yintercept = 400 / (4 * iters)), linetype = 'dashed') +
ggtitle('Relative Bulk-ESS (zsreff)') +
scale_y_continuous(breaks = seq(0, 1.5, by = 0.25))
Result: The bulk effective sample size of the mean does not detect the problem, as it focuses on location differences between chains.
Rank plots can be used to visualize differences between chains. Here, we show rank plots for the case of 4 chains, 250 draws per chain, and with one chain having a standard deviation of 0.75 as opposed to a standard deviation of 1 for the other chains.
iters = 250
chains = 4
scale = 0.75
r <- array(rnorm(iters * chains), c(iters, chains))
r[, 1] <- r[, 1] * scale
colnames(r) <- 1:4
mcmc_hist_r_scale(r)
Although the folded-split-Rhat is only \(1.06\), the rank plots clearly show that the first chain behaves differently.
The classic split-Rhat is based on calculating within- and between-chain variances. If the marginal distribution of a chain is such that the variance is not defined (i.e., infinite), the classic split-Rhat is not well justified. In this section, we will use the Cauchy distribution as an example of such a distribution. Even when the mean and variance are finite, the distribution can be far from Gaussian; especially distributions with very long tails cause instability in variance and autocorrelation estimates. To alleviate these problems, we will use split-Rhat for rank-normalized draws.
The following Cauchy models are from Michael Betancourt’s case study Fitting The Cauchy Distribution.
We already looked at the nominal Cauchy model with direct parameterization in the main text, but for completeness, we take a closer look using different variants of the diagnostics.
writeLines(readLines("cauchy_nom.stan"))
parameters {
vector[50] x;
}
model {
x ~ cauchy(0, 1);
}
generated quantities {
real I = fabs(x[1]) < 1 ? 1 : 0;
}
Run the nominal model:
fit_nom <- stan(file = 'cauchy_nom.stan', seed = 7878, refresh = 0)
Warning: There were 1233 transitions after warmup that exceeded the maximum treedepth. Increase max_treedepth above 10. See
http://mc-stan.org/misc/warnings.html#maximum-treedepth-exceeded
Warning: Examine the pairs() plot to diagnose sampling problems
Exceeding the maximum treedepth and a low Bayesian fraction of missing information are diagnostics specific to dynamic HMC (Betancourt, 2017). We get warnings about a very large number of post-warmup transitions that exceeded the maximum treedepth, which is likely due to the very long tails of the Cauchy distribution. All chains also have a low estimated Bayesian fraction of missing information, indicating slow mixing.
Trace plots for the first parameter look wild with occasional large values:
samp <- as.array(fit_nom)
mcmc_trace(samp[, , 1])
Let’s check Rhat and ESS diagnostics.
res <- monitor_extra(samp[, , 1:50])
which_min_ess <- which.min(res$tailseff)
plot_rhat(res)
For one parameter, the Rhat values exceed the classic threshold of 1.1. Depending on the Rhat variant, a few other parameters also exceed the threshold of 1.01. The rank normalized split-Rhat has several values over 1.01. Note that the classic split-Rhat is not well defined in this example, because the mean and variance of the Cauchy distribution are not finite.
plot_ess(res)
Both classic and new effective sample size estimates have several very small values, and so the overall sample shouldn’t be trusted.
Result: Effective sample size is more sensitive than (rank-normalized) split-Rhat to detect problems of slow mixing.
We also check the log posterior value lp__ and find out that the effective sample size is worryingly low.
res <- monitor_extra(samp[, , 51:52])
cat('lp__: Bulk-ESS = ', round(res['lp__', 'zsseff'], 2), '\n')
lp__: Bulk-ESS = 117
cat('lp__: Tail-ESS = ', round(res['lp__', 'tailseff'], 2), '\n')
lp__: Tail-ESS = 323
We can further analyze potential problems using local effective sample size and rank plots. We examine x[36], which has the smallest tail-ESS of 34.
We examine the sampling efficiency in different parts of the posterior by computing the effective sample size for small interval probability estimates (see Section Efficiency for small interval probability estimates). Each interval contains \(1/k\) of the draws (e.g., with \(k=20\)). The small interval efficiency measures mixing of an indicator function which indicates when the values are inside the specific small interval. This gives us a local efficiency measure which does not depend on the shape of the distribution.
plot_local_ess(fit = fit_nom, par = which_min_ess, nalpha = 20)
We see that the efficiency is worryingly low in the tails (caused by slow mixing in the long tails of the Cauchy distribution). Orange ticks show draws that exceeded the maximum treedepth.
An alternative way to examine the effective sample size in different parts of the posterior is to compute effective sample sizes for quantiles (see Section Efficiency for quantiles). Each interval has a specified proportion of draws, and the efficiency measures the mixing of an indicator function that flags when the values are smaller than or equal to the corresponding quantile.
plot_quantile_ess(fit = fit_nom, par = which_min_ess, nalpha = 40)
We see that the efficiency is worryingly low in the tails (caused by slow mixing in the long tails of the Cauchy distribution). Orange ticks show draws that exceeded the maximum treedepth.
We can further analyze potential problems using rank plots, from which we clearly see differences between chains.
xmin <- paste0("x[", which_min_ess, "]")
mcmc_hist_r_scale(samp[, , xmin])
We can try to improve the performance by increasing max_treedepth to \(20\):
fit_nom_td20 <- stan(
file = 'cauchy_nom.stan', seed = 7878,
refresh = 0, control = list(max_treedepth = 20)
)
Trace plots for the first parameter still look wild with occasional large values.
samp <- as.array(fit_nom_td20)
mcmc_trace(samp[, , 1])
res <- monitor_extra(samp[, , 1:50])
which_min_ess <- which.min(res$tailseff)
We check the diagnostics for all \(x\).
plot_rhat(res)
All Rhats are below \(1.1\), but many are over \(1.01\). Classic split-Rhat has more variation than the rank normalized Rhat (note that the former is not well defined). The folded rank normalized Rhat shows that there is still more variation in the scale than in the location between different chains.
plot_ess(res)
Some classic effective sample sizes are very small. If we didn’t realize that the variance is infinite, we might try to run longer chains; but with an infinite variance, zero relative efficiency (ESS/S) is the truth, and longer chains won’t help with that. However, other quantities can be well defined, which is why it is useful to also look at the rank normalized version as a generic transformation to achieve finite mean and variance. The smallest bulk-ESS is less than 1000, which is not that bad. The smallest median-ESS is larger than 2500, that is, we are able to estimate the median efficiently. However, many tail-ESS values are less than 400, indicating problems in estimating the scale of the posterior.
Result: The rank normalized effective sample size is more stable than classic effective sample size, which is not well defined for the Cauchy distribution.
Result: It is useful to look at both bulk- and tail-ESS.
We also check lp__. Although increasing max_treedepth improved the bulk-ESS of x, the efficiency for lp__ didn’t change.
res <- monitor_extra(samp[, , 51:52])
cat('lp__: Bulk-ESS =', round(res['lp__', 'zsseff'], 2), '\n')
lp__: Bulk-ESS = 240
cat('lp__: Tail-ESS =', round(res['lp__', 'tailseff'], 2), '\n')
lp__: Tail-ESS = 587
We examine the sampling efficiency in different parts of the posterior by computing the effective sample size for small interval probability estimates.
plot_local_ess(fit = fit_nom_td20, par = which_min_ess, nalpha = 20)
Increasing max_treedepth does not seem to have improved the efficiency in the tails much. We also examine the effective sample size of different quantile estimates.
plot_quantile_ess(fit = fit_nom_td20, par = which_min_ess, nalpha = 40)
The rank plot visualisation of x[11], which has the smallest tail-ESS among the \(x\), indicates clear convergence problems.
xmin <- paste0("x[", which_min_ess, "]")
mcmc_hist_r_scale(samp[, , xmin])
The rank plot visualisation of lp__, which has an effective sample size of 240, doesn’t look so good either.
mcmc_hist_r_scale(samp[, , "lp__"])
Let’s try running 8 times longer chains.
fit_nom_td20l <- stan(
file = 'cauchy_nom.stan', seed = 7878,
refresh = 0, control = list(max_treedepth = 20),
warmup = 1000, iter = 9000
)
Warning: There were 7 transitions after warmup that exceeded the maximum treedepth. Increase max_treedepth above 20. See
http://mc-stan.org/misc/warnings.html#maximum-treedepth-exceeded
Warning: Examine the pairs() plot to diagnose sampling problems
Trace plots for the first parameter still look wild with occasional large values.
samp <- as.array(fit_nom_td20l)
mcmc_trace(samp[, , 1])
res <- monitor_extra(samp[, , 1:50])
which_min_ess <- which.min(res$tailseff)
Let’s check the diagnostics for all \(x\).
plot_rhat(res)
All Rhats are below \(1.01\). The classic split-Rhat has more variation than the rank normalized Rhat (note that the former is not well defined in this case).
plot_ess(res)
Most classic ESS’s are close to zero. Running longer chains just made most classic ESS’s even smaller.
The smallest bulk-ESS is around 5000, which is not that bad. The smallest median-ESS is larger than 25000, that is, we are able to estimate the median efficiently. However, the smallest tail-ESS is 919, indicating problems in estimating the scale of the posterior.
Result: The rank normalized effective sample size is more stable than classic effective sample size even for very long chains.
Result: It is useful to look at both bulk- and tail-ESS.
We also check lp__. Although increasing the number of iterations improved bulk-ESS of the \(x\), the relative efficiency for lp__ didn’t change.
res <- monitor_extra(samp[, , 51:52])
cat('lp__: Bulk-ESS =', round(res['lp__', 'zsseff'], 2), '\n')
lp__: Bulk-ESS = 1289
cat('lp__: Tail-ESS =', round(res['lp__', 'tailseff'], 2), '\n')
lp__: Tail-ESS = 1887
We examine the sampling efficiency in different parts of the posterior by computing the effective sample size for small interval probability estimates.
plot_local_ess(fit = fit_nom_td20l, par = which_min_ess, nalpha = 20)
Increasing the chain length did not seem to change the relative efficiency. With more draws from the longer chains we can use a finer resolution for the local efficiency estimates.
plot_local_ess(fit = fit_nom_td20l, par = which_min_ess, nalpha = 100)
The sampling efficiency far in the tails is worryingly low. This was more difficult to see previously with fewer draws from the tails. We see similar problems in the plot of effective sample size for quantiles.
plot_quantile_ess(fit = fit_nom_td20l, par = which_min_ess, nalpha = 100)
Let’s look at the rank plot visualisation of x[39], which has the smallest tail-ESS among the \(x\).
xmin <- paste0("x[", which_min_ess, "]")
mcmc_hist_r_scale(samp[, , xmin])
Increasing the number of iterations could not remove the mixing problems in the tails. The mixing problem is inherent to the nominal parameterization of the Cauchy distribution.
Next, we examine an alternative parameterization and consider the Cauchy distribution as a scale mixture of Gaussian distributions. The model has two parameters and the Cauchy distributed \(x\) can be computed from those. In addition to improved sampling performance, the example illustrates that focusing on diagnostics matters.
writeLines(readLines("cauchy_alt_1.stan"))
parameters {
vector[50] x_a;
vector<lower=0>[50] x_b;
}
transformed parameters {
vector[50] x = x_a ./ sqrt(x_b);
}
model {
x_a ~ normal(0, 1);
x_b ~ gamma(0.5, 0.5);
}
generated quantities {
real I = fabs(x[1]) < 1 ? 1 : 0;
}
We run the alternative model:
fit_alt1 <- stan(file='cauchy_alt_1.stan', seed=7878, refresh = 0)
There are no warnings and the sampling is much faster.
samp <- as.array(fit_alt1)
res <- monitor_extra(samp[, , 101:150])
which_min_ess <- which.min(res$tailseff)
plot_rhat(res)
All Rhats are below \(1.01\). Classic split-Rhats also look good even though they are not well defined for the Cauchy distribution.
plot_ess(res)
Result: Rank normalized ESS’s have less variation than the classic ones, which are not well defined for the Cauchy distribution.
We check lp__:
res <- monitor_extra(samp[, , 151:152])
cat('lp__: Bulk-ESS =', round(res['lp__', 'zsseff'], 2), '\n')
lp__: Bulk-ESS = 1310
cat('lp__: Tail-ESS =', round(res['lp__', 'tailseff'], 2), '\n')
lp__: Tail-ESS = 1928
The relative efficiencies for lp__ are also much better than with the nominal parameterization.
We examine the sampling efficiency in different parts of the posterior by computing the effective sample size for small interval probability estimates.
plot_local_ess(fit = fit_alt1, par = 100 + which_min_ess, nalpha = 20)
The effective sample size is good in all parts of the posterior. We also examine the effective sample size of different quantile estimates.
plot_quantile_ess(fit = fit_alt1, par = 100 + which_min_ess, nalpha = 40)
We compare the mean relative efficiencies of the underlying parameters in the new parameterization and the actual \(x\) we are interested in.
res <- monitor_extra(samp[, , 101:150])
res1 <- monitor_extra(samp[, , 1:50])
res2 <- monitor_extra(samp[, , 51:100])
cat('Mean Bulk-ESS for x =' , round(mean(res[, 'zsseff']), 2), '\n')
Mean Bulk-ESS for x = 3629.24
cat('Mean Tail-ESS for x =' , round(mean(res[, 'tailseff']), 2), '\n')
Mean Tail-ESS for x = 2265.22
cat('Mean Bulk-ESS for x_a =' , round(mean(res1[, 'zsseff']), 2), '\n')
Mean Bulk-ESS for x_a = 3956.06
cat('Mean Bulk-ESS for x_b =' , round(mean(res2[, 'zsseff']), 2), '\n')
Mean Bulk-ESS for x_b = 2761.22
Result: We see that the effective sample size of the interesting \(x\) can be different from the effective sample size of the parameters \(x_a\) and \(x_b\) that we used to compute it.
The rank plot visualisation of x[40], which has the smallest tail-ESS of 1823 among the \(x\), looks better than for the nominal parameterization.
xmin <- paste0("x[", which_min_ess, "]")
mcmc_hist_r_scale(samp[, , xmin])
Similarly, the rank plot visualisation of lp__, which has a relative efficiency of 0.33, looks better than for the nominal parameterization.
mcmc_hist_r_scale(samp[, , "lp__"])
Another alternative parameterization is obtained by a univariate transformation as shown in the following code (see also the 3rd alternative in Michael Betancourt’s case study).
writeLines(readLines("cauchy_alt_3.stan"))
parameters {
vector<lower=0, upper=1>[50] x_tilde;
}
transformed parameters {
vector[50] x = tan(pi() * (x_tilde - 0.5));
}
model {
// Implicit uniform prior on x_tilde
}
generated quantities {
real I = fabs(x[1]) < 1 ? 1 : 0;
}
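The transformation is just the inverse CDF of the standard Cauchy; a quick sanity check of our own in R:
# Uniform draws pushed through tan(pi * (u - 0.5)) have the Cauchy
# quartiles of approximately -1, 0, and 1 (sketch).
set.seed(1)
u <- runif(1e5)
quantile(tan(pi * (u - 0.5)), probs = c(0.25, 0.5, 0.75))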
We run the alternative model:
fit_alt3 <- stan(file='cauchy_alt_3.stan', seed=7878, refresh = 0)
There are no warnings, and the sampling is much faster than for the nominal model.
samp <- as.array(fit_alt3)
res <- monitor_extra(samp[, , 51:100])
which_min_ess <- which.min(res$tailseff)
plot_rhat(res)
All Rhats except some folded-Rhats are below 1.01. Classic split-Rhats also look good even though they are not well defined for the Cauchy distribution.
plot_ess(res)
Result: Rank normalized relative efficiencies have less variation than classic ones. Some bulk and median relative efficiencies are slightly larger than 1, which is possible for antithetic Markov chains, which have negative correlations at odd lags.
We also take a closer look at the lp__ value:
res <- monitor_extra(samp[, , 101:102])
cat('lp__: Bulk-ESS =', round(res['lp__', 'zsseff'], 2), '\n')
lp__: Bulk-ESS = 1494
cat('lp__: Tail-ESS =', round(res['lp__', 'tailseff'], 2), '\n')
lp__: Tail-ESS = 1884
The effective sample sizes for these are also much better than with the nominal parameterization.
We examine the sampling efficiency in different parts of the posterior by computing the effective sample size for small interval probability estimates.
plot_local_ess(fit = fit_alt3, par = 50 + which_min_ess, nalpha = 20)
We also examine the sampling efficiency of different quantile estimates.
plot_quantile_ess(fit = fit_alt3, par = 50 + which_min_ess, nalpha = 40)
The effective sample size in tails is worse than for the first alternative parameterization, although it’s still better than for the nominal parameterization.
We compare the mean effective sample sizes of the underlying parameter in the new parameterization and the actual Cauchy distributed \(x\) we are interested in.
res <- monitor_extra(samp[, , 51:100])
res1 <- monitor_extra(samp[, , 1:50])
cat('Mean bulk-seff for x =' , round(mean(res[, 'zsseff']), 2), '\n')
Mean bulk-seff for x = 4702.98
cat('Mean tail-seff for x =' , round(mean(res[, 'zfsseff']), 2), '\n')
Mean tail-seff for x = 1602.7
cat('Mean bulk-seff for x_tilde =' , round(mean(res1[, 'zsseff']), 2), '\n')
Mean bulk-seff for x_tilde = 4702.98
cat('Mean tail-seff for x_tilde =' , round(mean(res1[, 'zfsseff']), 2), '\n')
Mean tail-seff for x_tilde = 1612.14
The rank plot visualisation of x[5], which has the smallest tail-ESS of 1891 among the \(x\), shows good efficiency, similar to the results for lp__.
xmin <- paste0("x[", which_min_ess, "]")
mcmc_hist_r_scale(samp[, , xmin])
mcmc_hist_r_scale(samp[, , "lp__"])
Half-Cauchy priors are common and, for example, in Stan usually set using the nominal parameterization. However, when the constraint <lower=0> is used, Stan does the sampling automatically in the unconstrained log(x) space, which changes the geometry crucially.
writeLines(readLines("half_cauchy_nom.stan"))
parameters {
vector<lower=0>[50] x;
}
model {
x ~ cauchy(0, 1);
}
generated quantities {
real I = fabs(x[1]) < 1 ? 1 : 0;
}
We run the half-Cauchy model with nominal parameterization (and positive constraint).
fit_half_nom <- stan(file = 'half_cauchy_nom.stan', seed = 7878, refresh = 0)
There are no warnings and the sampling is much faster than for the full Cauchy distribution with nominal parameterization.
samp <- as.array(fit_half_nom)
res <- monitor_extra(samp[, , 1:50])
which_min_ess <- which.min(res$tailseff)
plot_rhat(res)
All Rhats are below \(1.01\). Classic split-Rhats also look good even though they are not well defined for the half-Cauchy distribution.
plot_ess(res)
Result: Rank normalized effective sample sizes have less variation than the classic ones. Some bulk and median relative efficiencies are larger than 1, which is possible for antithetic Markov chains, which have negative correlations at odd lags.
Due to the <lower=0> constraint, the sampling was done in the log(x) space, and we can also check the performance in that space.
res <- monitor_extra(log(samp[, , 1:50]))
plot_ess(res)
\(\log(x)\) is quite close to Gaussian, and thus the classic effective sample size is also close to the rank normalized ESS, which is exactly the same as for the original \(x\), as rank normalization is invariant to monotone transformations.
Result: The rank normalized effective sample size is close to the classic effective sample size for transformations which make the distribution close to Gaussian.
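A quick check of this invariance (our own sketch, assuming rstan::ess_bulk is available): the rank normalized ESS is identical for x and for its strictly increasing transformation log(x).
# Rank-normalized ESS is unchanged by the log transformation.
x1 <- samp[, , 1]   # draws of x[1], iterations x chains
c(ess_x = rstan::ess_bulk(x1), ess_logx = rstan::ess_bulk(log(x1)))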
We examine the sampling efficiency in different parts of the posterior by computing the effective sample size for small interval probability estimates.
plot_local_ess(fit = fit_half_nom, par = which_min_ess, nalpha = 20)
The effective sample size is good overall, with only a small dip in the tails. We can also examine the effective sample size of different quantile estimates.
plot_quantile_ess(fit = fit_half_nom, par = which_min_ess, nalpha = 40)
The rank plot visualisation of x[32], which has the smallest tail-ESS of 1742 among \(x\), looks good.
xmin <- paste0("x[", which_min_ess, "]")
mcmc_hist_r_scale(samp[, , xmin])
The rank plot visualisation of lp__ reveals some small differences in the scales, but it’s difficult to know whether this small variation from uniform is relevant.
mcmc_hist_r_scale(samp[, , "lp__"])
writeLines(readLines("half_cauchy_alt.stan"))
parameters {
vector<lower=0>[50] x_a;
vector<lower=0>[50] x_b;
}
transformed parameters {
vector[50] x = x_a .* sqrt(x_b);
}
model {
x_a ~ normal(0, 1);
x_b ~ inv_gamma(0.5, 0.5);
}
generated quantities {
real I = fabs(x[1]) < 1 ? 1 : 0;
}
Run the half-Cauchy model with the alternative parameterization:
fit_half_reparam <- stan(
file = 'half_cauchy_alt.stan', seed = 7878, refresh = 0
)
There are no warnings and the sampling is as fast as for the half-Cauchy model with nominal parameterization.
samp <- as.array(fit_half_reparam)
res <- monitor_extra(samp[, , 101:150])
which_min_ess <- which.min(res$tailseff)
plot_rhat(res)
plot_ess(res)
Result: The rank normalized relative efficiencies have less variation than the classic ones, which are not well defined for the Cauchy distribution. Based on Bulk-ESS and median-ESS, the efficiency for central quantities is much lower, but based on Tail-ESS and MAD-ESS, the efficiency in the tails is slightly better than for the half-Cauchy distribution with nominal parameterization. We also see that a parameterization which is good for the full Cauchy distribution is not necessarily good for the half-Cauchy distribution, as the <lower=0> constraint additionally changes the parameterization.
We also check the lp__ values:
res <- monitor_extra(samp[, , 151:152])
cat('lp__: Bulk-ESS =', round(res['lp__', 'zsseff'], 2), '\n')
lp__: Bulk-ESS = 977
cat('lp__: Tail-ESS =', round(res['lp__', 'tailseff'], 2), '\n')
lp__: Tail-ESS = 1750
We examine the sampling efficiency in different parts of the posterior by computing the effective sample size for small interval probability estimates.
plot_local_ess(fit_half_reparam, par = 100 + which_min_ess, nalpha = 20)
We also examine the effective sample size for different quantile estimates.
plot_quantile_ess(fit_half_reparam, par = 100 + which_min_ess, nalpha = 40)
The effective sample size near zero is much worse than for the half-Cauchy distribution with nominal parameterization.
The rank plot visualisation of x[20], which has the smallest tail-ESS among the \(x\), reveals deviations from uniformity, which is expected with a lower effective sample size.
xmin <- paste0("x[", which_min_ess, "]")
mcmc_hist_r_scale(samp[, , xmin])
A similar result is obtained when looking at the rank plots of lp__.
mcmc_hist_r_scale(samp[, , "lp__"])
So far, we have run all models in Stan, but we also want to investigate whether similar problems arise in probabilistic programming languages that use samplers other than variants of Hamiltonian Monte Carlo. Thus, we will fit the Cauchy models also with Jags, which uses a dialect of the BUGS language to specify models. Jags uses a clever mix of Gibbs and Metropolis-Hastings sampling. This kind of sampling does not scale well to high-dimensional posteriors of strongly interdependent parameters, but for the relatively simple models discussed in this case study it should work just fine.
The Jags code for the nominal parameterization of the Cauchy distribution looks as follows:
writeLines(readLines("cauchy_nom.bugs"))
model {
for (i in 1:50) {
x[i] ~ dt(0, 1, 1)
}
}
First, we initialize the Jags model for later reuse.
jags_nom <- jags.model(
"cauchy_nom.bugs",
n.chains = 4, n.adapt = 10000
)
Compiling model graph
Resolving undeclared variables
Allocating nodes
Graph information:
Observed stochastic nodes: 0
Unobserved stochastic nodes: 50
Total graph size: 52
Initializing model
Next, we sample 1000 iterations for each of the 4 chains for easy comparison with the corresponding Stan results.
samp_jags_nom <- coda.samples(
jags_nom, variable.names = "x",
n.iter = 1000
)
# bind the list of per-chain draws into an array and permute the dimensions
# to (iterations, chains, parameters) as expected by monitor()
samp_jags_nom <- aperm(abind(samp_jags_nom, along = 3), c(1, 3, 2))
dimnames(samp_jags_nom)[[2]] <- paste0("chain:", 1:4)
We summarize the model as follows:
mon <- monitor(samp_jags_nom)
print(mon)
Inference for the input samples (4 chains: each with iter = 1000; warmup = 0):
Q5 Q50 Q95 Mean SD Rhat Bulk_ESS Tail_ESS
x[1] -6.78 -0.03 5.15 -0.52 25.81 1 3630 3646
x[2] -6.51 -0.02 6.01 2.03 117.36 1 3945 3428
x[3] -6.26 0.02 7.13 4.51 219.74 1 4107 3648
x[4] -6.64 -0.01 6.77 0.20 116.86 1 3747 3811
x[5] -7.06 -0.01 5.53 -1.80 95.73 1 3865 3931
x[6] -6.00 0.01 6.07 -1.71 90.06 1 4001 3871
x[7] -6.06 -0.04 5.81 -3.79 202.24 1 4118 3624
x[8] -6.26 0.00 6.14 1.06 39.71 1 4059 3842
x[9] -6.12 -0.02 6.68 0.53 33.52 1 3942 3656
x[10] -6.62 -0.01 5.80 0.75 47.98 1 3884 3973
x[11] -5.86 0.03 6.31 1.68 44.16 1 3931 3950
x[12] -6.81 0.03 6.13 -72.04 4583.23 1 4227 3922
x[13] -7.07 0.01 6.44 -0.98 59.15 1 4178 3849
x[14] -6.46 -0.01 6.39 -1.58 59.92 1 3672 3715
x[15] -6.88 -0.03 6.07 0.60 39.47 1 3987 3690
x[16] -6.36 -0.03 5.55 53.16 4042.14 1 4394 4011
x[17] -5.75 0.03 6.22 6.32 343.12 1 3782 3644
x[18] -6.01 -0.01 6.37 0.48 21.41 1 3925 3929
x[19] -6.26 0.00 5.90 -0.48 36.69 1 3864 3917
x[20] -5.65 0.01 6.34 -0.29 44.63 1 3979 3873
x[21] -6.00 0.03 7.19 -1.41 51.81 1 3936 3971
x[22] -7.09 -0.05 6.45 -0.21 122.62 1 3860 3864
x[23] -6.13 -0.02 6.10 1.95 111.14 1 3974 3517
x[24] -6.18 0.01 7.15 0.77 31.57 1 4139 4057
x[25] -6.28 -0.01 6.52 4.27 155.08 1 4139 3745
x[26] -5.97 0.04 6.94 0.41 57.85 1 4155 3739
x[27] -6.40 -0.06 6.33 -0.26 128.37 1 4014 4102
x[28] -6.24 -0.03 7.17 0.91 38.20 1 3675 3947
x[29] -6.09 -0.04 6.55 2.44 99.93 1 4072 3933
x[30] -6.22 0.00 6.19 0.15 49.14 1 4075 3795
x[31] -6.58 -0.02 6.53 -0.84 72.67 1 4049 3709
x[32] -6.08 -0.01 5.44 -1.03 111.00 1 4035 3931
x[33] -6.63 0.01 6.62 0.70 39.90 1 4032 4054
x[34] -6.39 0.07 6.30 2.14 144.04 1 4083 3961
x[35] -6.20 0.01 6.82 1.23 93.79 1 3948 3587
x[36] -5.80 -0.02 6.49 0.61 195.74 1 3891 4013
x[37] -5.66 0.00 6.54 0.05 17.74 1 3929 4056
x[38] -6.21 0.03 6.91 -1.26 102.00 1 3775 4102
x[39] -6.00 0.01 5.96 0.10 120.09 1 4120 3919
x[40] -5.81 0.06 6.84 -0.09 31.51 1 4036 3832
x[41] -6.49 -0.02 6.50 1.55 80.36 1 3906 3571
x[42] -5.98 0.00 6.29 -0.53 87.00 1 3996 3744
x[43] -6.29 -0.01 6.05 -5.64 337.54 1 3986 3891
x[44] -6.36 0.04 5.85 -7.68 451.64 1 3657 3514
x[45] -6.17 0.00 6.42 0.06 18.66 1 4136 4010
x[46] -6.60 0.00 5.52 -2.07 98.26 1 3680 3971
x[47] -6.38 0.03 6.31 -2.68 104.61 1 3813 3810
x[48] -6.09 -0.06 5.46 -1.09 57.56 1 3939 4144
x[49] -4.96 -0.03 6.32 0.30 48.92 1 4018 3592
x[50] -5.84 0.04 5.59 -0.67 57.92 1 3785 3851
For each parameter, Bulk_ESS and Tail_ESS are crude measures of
effective sample size for bulk and tail quantities respectively (good values is
ESS > 400), and Rhat is the potential scale reduction factor on rank normalized
split chains (at convergence, Rhat = 1).
which_min_ess <- which.min(mon[1:50, 'Bulk_ESS'])
The overall results look very promising with Rhats = 1 and ESS values close to the total number of draws of 4000. We take a detailed look at x[1], which has the smallest bulk-ESS of 3630.
We examine the sampling efficiency in different parts of the posterior by computing the efficiency estimates for small interval probability estimates.
plot_local_ess(fit = samp_jags_nom, par = which_min_ess, nalpha = 20)
The efficiency estimate is good in all parts of the posterior. Further, we examine the sampling efficiency of different quantile estimates.
plot_quantile_ess(fit = samp_jags_nom, par = which_min_ess, nalpha = 40)
Rank plots also look rather similar across chains.
xmin <- paste0("x[", which_min_ess, "]")
mcmc_hist_r_scale(samp_jags_nom[, , xmin])
Result: Jags seems to be able to sample from the nominal parameterization of the Cauchy distribution just fine.
We continue our discussion of hierarchical models on the Eight Schools data, which we started in the section Eight Schools. We also analyse the performance of different variants of the diagnostics.
writeLines(readLines("eight_schools_cp.stan"))
data {
int<lower=0> J;
real y[J];
real<lower=0> sigma[J];
}
parameters {
real mu;
real<lower=0> tau;
real theta[J];
}
model {
mu ~ normal(0, 5);
tau ~ cauchy(0, 5);
theta ~ normal(mu, tau);
y ~ normal(theta, sigma);
}
In the main text, we observed that the centered parameterization of this hierarchical model did not work well with the default MCMC options of Stan, even with increased adapt_delta, and so we directly try to fit the model with longer chains.
Low efficiency can sometimes be compensated for with longer chains. Let's check 10 times longer chains.
fit_cp2 <- stan(
file = 'eight_schools_cp.stan', data = eight_schools,
iter = 20000, chains = 4, seed = 483892929, refresh = 0,
control = list(adapt_delta = 0.95)
)
Warning: There were 2335 divergent transitions after warmup. Increasing adapt_delta above 0.95 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
Warning: There were 1 chains where the estimated Bayesian Fraction of Missing Information was low. See
http://mc-stan.org/misc/warnings.html#bfmi-low
Warning: Examine the pairs() plot to diagnose sampling problems
monitor(fit_cp2)
Inference for the input samples (4 chains: each with iter = 20000; warmup = 10000):
Q5 Q50 Q95 Mean SD Rhat Bulk_ESS Tail_ESS
mu -0.99 4.84 10.34 4.88 3.57 1.05 71 189
tau 0.33 2.81 10.02 3.67 3.31 1.08 45 17
theta[1] -1.36 6.43 16.32 6.76 5.64 1.01 407 9491
theta[2] -2.48 5.47 12.55 5.42 4.80 1.02 153 9429
theta[3] -4.92 4.66 11.52 4.41 5.43 1.03 117 10374
theta[4] -2.89 5.34 12.40 5.23 4.97 1.03 140 9670
theta[5] -4.48 4.32 10.69 4.09 4.94 1.04 89 4758
theta[6] -4.15 4.70 11.27 4.49 5.08 1.03 118 11277
theta[7] -0.88 6.60 15.53 6.83 5.11 1.01 449 11102
theta[8] -3.46 5.41 13.34 5.34 5.49 1.02 172 10408
lp__ -24.94 -14.78 0.22 -13.84 7.59 1.07 50 86
For each parameter, Bulk_ESS and Tail_ESS are crude measures of
effective sample size for bulk and tail quantities respectively (good values is
ESS > 400), and Rhat is the potential scale reduction factor on rank normalized
split chains (at convergence, Rhat = 1).
res <- monitor_extra(fit_cp2)
print(res)
Inference for the input samples (4 chains: each with iter = 20000; warmup = 10000):
mean se_mean sd Q5 Q50 Q95 seff reff sseff zseff zsseff zsreff Rhat sRhat
mu 4.88 0.49 3.57 -0.99 4.84 10.34 53 0.00 71 54 71 0.00 1.05 1.05
tau 3.67 0.30 3.31 0.33 2.81 10.02 123 0.00 173 35 45 0.00 1.02 1.02
theta[1] 6.76 0.22 5.64 -1.36 6.43 16.32 666 0.02 1057 281 407 0.01 1.01 1.01
theta[2] 5.42 0.43 4.80 -2.48 5.47 12.55 124 0.00 169 113 153 0.00 1.02 1.02
theta[3] 4.41 0.53 5.43 -4.92 4.66 11.52 105 0.00 146 86 117 0.00 1.03 1.02
theta[4] 5.23 0.46 4.97 -2.89 5.34 12.40 118 0.00 163 105 140 0.00 1.02 1.02
theta[5] 4.09 0.57 4.94 -4.48 4.32 10.69 76 0.00 102 69 89 0.00 1.03 1.03
theta[6] 4.49 0.51 5.08 -4.15 4.70 11.27 100 0.00 137 87 118 0.00 1.03 1.03
theta[7] 6.83 0.23 5.11 -0.88 6.60 15.53 512 0.01 745 309 449 0.01 1.01 1.01
theta[8] 5.34 0.43 5.49 -3.46 5.41 13.34 162 0.00 231 125 172 0.00 1.02 1.02
lp__ -13.84 1.32 7.59 -24.94 -14.78 0.22 33 0.00 44 37 50 0.00 1.08 1.08
zRhat zsRhat zfsRhat zfsseff zfsreff tailseff tailreff medsseff medsreff madsseff madsreff
mu 1.05 1.05 1.02 152 0.00 189 0.00 174 0 174 0.00
tau 1.08 1.08 1.01 1035 0.03 17 0.00 175 0 167 0.00
theta[1] 1.01 1.01 1.00 3297 0.08 9491 0.24 177 0 268 0.01
theta[2] 1.02 1.02 1.00 1817 0.05 9429 0.24 173 0 177 0.00
theta[3] 1.03 1.03 1.01 736 0.02 10374 0.26 168 0 172 0.00
theta[4] 1.03 1.03 1.01 1468 0.04 9670 0.24 167 0 167 0.00
theta[5] 1.04 1.04 1.01 375 0.01 4758 0.12 170 0 178 0.00
theta[6] 1.03 1.03 1.01 644 0.02 11277 0.28 176 0 176 0.00
theta[7] 1.01 1.01 1.00 2761 0.07 11102 0.28 179 0 852 0.02
theta[8] 1.02 1.02 1.00 2816 0.07 10408 0.26 166 0 191 0.00
lp__ 1.07 1.07 1.06 55 0.00 86 0.00 170 0 157 0.00
We still get a large number of divergent transitions, so it is clear that the results can't be trusted even if all other diagnostics looked good. Still, it is worth looking at additional diagnostics to better understand what is happening.
Some rank-normalized split-Rhats are still larger than \(1.01\). Bulk-ESS for tau and lp__ are around 50, which corresponds to a very low relative efficiency and is clearly below our recommendation of ESS > 400. In such cases it is useful to look at the local efficiency estimates, too (and the large number of divergences is a clear indication of problems on its own).
We examine the sampling efficiency in different parts of the posterior by computing the effective sample size for small intervals for tau.
plot_local_ess(fit = fit_cp2, par = "tau", nalpha = 50)
We see that the sampler has difficulties in exploring small tau values. As ESS < 400 for the small interval probability estimates at small tau values, we may suspect that we miss a substantial amount of posterior mass and get biased estimates.
We also examine the effective sample size of different quantile estimates.
plot_quantile_ess(fit = fit_cp2, par = "tau", nalpha = 100)
Several quantile estimates have ESS < 400, which raises the suspicion of convergence problems and possibly significant bias.
Let's see how Bulk-ESS and Tail-ESS change when we use more and more draws.
plot_change_ess(fit = fit_cp2, par = "tau")
We see that the recommended thresholds Bulk-ESS > 400 and Tail-ESS > 400 are eventually exceeded as more draws are used, so Bulk-ESS and Tail-ESS alone are not sufficient to detect the convergence problems in this case, even though the tail quantile estimates are able to detect them.
The rank plot visualisation of tau also shows clear sticking and mixing problems.
samp_cp2 <- as.array(fit_cp2)
mcmc_hist_r_scale(samp_cp2[, , "tau"])
Similar results are obtained for lp__, which is closely connected to tau for this model.
mcmc_hist_r_scale(samp_cp2[, , "lp__"])
We may also examine small interval efficiencies for mu.
plot_local_ess(fit = fit_cp2, par = "mu", nalpha = 50)
There are gaps of poor efficiency, which again indicates problems in the mixing of the chains. However, these problems do not occur for any specific range of values of mu as was the case for tau. This tells us that the sampler probably does not have problems with mu itself, but more likely with tau or a related quantity.
As we observed divergences, we shouldn't trust any Monte Carlo standard error (MCSE) estimates, as they are likely biased as well. However, for illustration purposes, we compute the MCSE, tail quantiles and corresponding effective sample sizes for the medians of mu and tau. Without mixing problems, the MCSE scales as \(1/\sqrt{S}\), so using 10 times more draws should have reduced the MCSE to about one third (\(1/\sqrt{10}\)); compared to the shorter MCMC run, this reduction has not materialized.
round(quantile_mcse(samp_cp2[ , , "mu"], prob = 0.5), 2)
mcse Q05 Q95 Seff
1 0.37 4.22 5.43 173.52
round(quantile_mcse(samp_cp2[ , , "tau"], prob = 0.5), 2)
mcse Q05 Q95 Seff
1 0.27 2.38 3.27 174.86
For further evidence, let's check chains 100 times longer than the default. This is not something we would recommend in practice, as it cannot solve the problems with divergences, as illustrated below.
fit_cp3 <- stan(
file = 'eight_schools_cp.stan', data = eight_schools,
iter = 200000, chains = 4, seed = 483892929, refresh = 0,
control = list(adapt_delta = 0.95)
)
Warning: There were 11699 divergent transitions after warmup. Increasing adapt_delta above 0.95 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
Warning: There were 3 chains where the estimated Bayesian Fraction of Missing Information was low. See
http://mc-stan.org/misc/warnings.html#bfmi-low
Warning: Examine the pairs() plot to diagnose sampling problems
monitor(fit_cp3)
Inference for the input samples (4 chains: each with iter = 2e+05; warmup = 1e+05):
Q5 Q50 Q95 Mean SD Rhat Bulk_ESS Tail_ESS
mu -1.10 4.37 9.83 4.37 3.33 1 18335 30265
tau 0.47 2.94 10.04 3.80 3.21 1 2200 769
theta[1] -1.59 5.73 16.38 6.29 5.69 1 23832 110854
theta[2] -2.53 4.85 12.76 4.94 4.76 1 27789 136002
theta[3] -5.06 4.09 11.88 3.87 5.36 1 39355 122761
theta[4] -2.95 4.68 12.64 4.75 4.86 1 32607 138545
theta[5] -4.55 3.79 10.77 3.55 4.72 1 34479 44492
theta[6] -4.16 4.16 11.62 4.01 4.91 1 37000 92227
theta[7] -1.03 5.92 15.64 6.38 5.16 1 20685 58049
theta[8] -3.49 4.74 13.51 4.85 5.39 1 36212 125498
lp__ -24.98 -15.15 -2.08 -14.58 6.87 1 2541 1074
For each parameter, Bulk_ESS and Tail_ESS are crude measures of
effective sample size for bulk and tail quantities respectively (good values is
ESS > 400), and Rhat is the potential scale reduction factor on rank normalized
split chains (at convergence, Rhat = 1).
res <- monitor_extra(fit_cp3)
print(res)
Inference for the input samples (4 chains: each with iter = 2e+05; warmup = 1e+05):
mean se_mean sd Q5 Q50 Q95 seff reff sseff zseff zsseff zsreff Rhat sRhat
mu 4.37 0.02 3.33 -1.10 4.37 9.83 18435 0.05 18411 18358 18335 0.05 1 1
tau 3.80 0.03 3.21 0.47 2.94 10.04 9355 0.02 9391 2206 2200 0.01 1 1
theta[1] 6.29 0.03 5.69 -1.59 5.73 16.38 30955 0.08 31120 23846 23832 0.06 1 1
theta[2] 4.94 0.03 4.76 -2.53 4.85 12.76 32922 0.08 33013 27715 27789 0.07 1 1
theta[3] 3.87 0.02 5.36 -5.06 4.09 11.88 53609 0.13 53632 39181 39355 0.10 1 1
theta[4] 4.75 0.02 4.86 -2.95 4.68 12.64 39472 0.10 39723 32535 32607 0.08 1 1
theta[5] 3.55 0.02 4.72 -4.55 3.79 10.77 41358 0.10 41819 34256 34479 0.09 1 1
theta[6] 4.01 0.02 4.91 -4.16 4.16 11.62 45877 0.11 46105 36785 37000 0.09 1 1
theta[7] 6.38 0.03 5.16 -1.03 5.92 15.64 24701 0.06 24691 20682 20685 0.05 1 1
theta[8] 4.85 0.02 5.39 -3.49 4.74 13.51 50166 0.13 50212 36407 36212 0.09 1 1
lp__ -14.58 0.14 6.87 -24.98 -15.15 -2.08 2414 0.01 2410 2545 2541 0.01 1 1
zRhat zsRhat zfsRhat zfsseff zfsreff tailseff tailreff medsseff medsreff madsseff madsreff
mu 1 1 1 28848 0.07 30265 0.08 16309 0.04 18276 0.05
tau 1 1 1 32860 0.08 769 0.00 10904 0.03 15361 0.04
theta[1] 1 1 1 40586 0.10 110854 0.28 15333 0.04 18846 0.05
theta[2] 1 1 1 41788 0.10 136002 0.34 15135 0.04 18463 0.05
theta[3] 1 1 1 31386 0.08 122761 0.31 16680 0.04 22054 0.06
theta[4] 1 1 1 38705 0.10 138545 0.35 16525 0.04 19038 0.05
theta[5] 1 1 1 37235 0.09 44492 0.11 16593 0.04 18887 0.05
theta[6] 1 1 1 36568 0.09 92227 0.23 16393 0.04 18180 0.05
theta[7] 1 1 1 35129 0.09 58049 0.15 14827 0.04 17370 0.04
theta[8] 1 1 1 38924 0.10 125498 0.31 15431 0.04 16092 0.04
lp__ 1 1 1 2942 0.01 1074 0.00 10102 0.03 13775 0.03
Rhat, Bulk-ESS and Tail-ESS are not able to detect the problems, although the Tail-ESS for tau is suspiciously low compared to the total number of draws.
plot_local_ess(fit = fit_cp3, par = "tau", nalpha = 100)
plot_quantile_ess(fit = fit_cp3, par = "tau", nalpha = 100)
And the rank plots of tau also show sticking and mixing problems for small values of tau.
samp_cp3 <- as.array(fit_cp3)
mcmc_hist_r_scale(samp_cp3[, , "tau"])
What we do see is an advantage of rank plots over trace plots: even with 100000 draws per chain, rank plots don't get crowded and the mixing problems of the chains are still easy to see.
With the centered parameterization, the mean estimate of tau tends to get smaller with more draws. With 400000 draws using the centered parameterization the mean estimate is 3.77 (se 0.03). With 40000 draws using the non-centered parameterization the mean estimate is 3.6 (se 0.02). The difference is more than 8 sigmas. We are able to see the convergence problems in the centered parameterization case if we look carefully (or use the divergence diagnostic), but Rhat, Bulk-ESS, Tail-ESS and Monte Carlo error estimates for the mean can't be trusted if other diagnostics indicate convergence problems!
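As a quick check of the arithmetic, relative to the smaller of the two standard errors:
# difference between the two mean estimates in units of the smaller se
round((3.77 - 3.6) / 0.02, 1)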
When the autocorrelation time is high, it has been common to thin the chains by saving only a small portion of the draws. This throws away information that is useful also for convergence diagnostics. With 400000 iterations per chain, thinning by 200, and 4 chains, we again end up with 4000 draws in total, as with the default settings.
fit_cp4 <- stan(
file = 'eight_schools_cp.stan', data = eight_schools,
iter = 400000, thin = 200, chains = 4, seed = 483892929, refresh = 0,
control = list(adapt_delta = 0.95)
)
Warning: There were 93 divergent transitions after warmup. Increasing adapt_delta above 0.95 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
Warning: There were 3 chains where the estimated Bayesian Fraction of Missing Information was low. See
http://mc-stan.org/misc/warnings.html#bfmi-low
Warning: Examine the pairs() plot to diagnose sampling problems
We observe several divergent transitions, and the estimated Bayesian fraction of missing information is low, which indicates convergence problems and potentially biased estimates.
Unfortunately, thinning makes the Rhat and ESS estimates miss the problems. The posterior mean is still biased, being more than 3 sigmas away from the estimate obtained using the non-centered parameterization.
monitor(fit_cp4)
Inference for the input samples (4 chains: each with iter = 4e+05; warmup = 2e+05):
Q5 Q50 Q95 Mean SD Rhat Bulk_ESS Tail_ESS
mu -0.91 4.46 9.73 4.40 3.24 1 3784 3648
tau 0.46 2.89 10.03 3.75 3.16 1 3625 2447
theta[1] -1.66 5.63 16.16 6.24 5.74 1 4101 3691
theta[2] -2.17 4.84 12.61 5.04 4.62 1 3950 3946
theta[3] -4.54 4.16 11.88 3.98 5.21 1 4121 3819
theta[4] -3.02 4.73 12.42 4.75 4.83 1 4026 4188
theta[5] -4.38 3.75 10.56 3.55 4.68 1 3790 3839
theta[6] -3.76 4.30 11.81 4.18 4.86 1 4057 4059
theta[7] -0.96 5.91 15.40 6.34 5.00 1 4154 3813
theta[8] -3.54 4.64 13.48 4.78 5.33 1 4040 3968
lp__ -25.06 -15.02 -1.64 -14.42 6.99 1 3689 2616
For each parameter, Bulk_ESS and Tail_ESS are crude measures of
effective sample size for bulk and tail quantities respectively (good values is
ESS > 400), and Rhat is the potential scale reduction factor on rank normalized
split chains (at convergence, Rhat = 1).
res <- monitor_extra(fit_cp4)
print(res)
Inference for the input samples (4 chains: each with iter = 4e+05; warmup = 2e+05):
mean se_mean sd Q5 Q50 Q95 seff reff sseff zseff zsseff zsreff Rhat sRhat
mu 4.40 0.05 3.24 -0.91 4.46 9.73 3737 0.93 3779 3741 3784 0.95 1 1
tau 3.75 0.05 3.16 0.46 2.89 10.03 4006 1.00 4005 3625 3625 0.91 1 1
theta[1] 6.24 0.09 5.74 -1.66 5.63 16.16 4064 1.02 4071 4096 4101 1.03 1 1
theta[2] 5.04 0.07 4.62 -2.17 4.84 12.61 3924 0.98 3935 3939 3950 0.99 1 1
theta[3] 3.98 0.08 5.21 -4.54 4.16 11.88 4097 1.02 4104 4115 4121 1.03 1 1
theta[4] 4.75 0.08 4.83 -3.02 4.73 12.42 3961 0.99 4006 3971 4026 1.01 1 1
theta[5] 3.55 0.08 4.68 -4.38 3.75 10.56 3720 0.93 3810 3742 3790 0.95 1 1
theta[6] 4.18 0.08 4.86 -3.76 4.30 11.81 3945 0.99 4028 4010 4057 1.01 1 1
theta[7] 6.34 0.08 5.00 -0.96 5.91 15.40 4118 1.03 4127 4144 4154 1.04 1 1
theta[8] 4.78 0.08 5.33 -3.54 4.64 13.48 3968 0.99 3989 4014 4040 1.01 1 1
lp__ -14.42 0.12 6.99 -25.06 -15.02 -1.64 3505 0.88 3563 3686 3689 0.92 1 1
zRhat zsRhat zfsRhat zfsseff zfsreff tailseff tailreff medsseff medsreff madsseff madsreff
mu 1 1 1 3655 0.91 3648 0.91 4193 1.05 3805 0.95
tau 1 1 1 4095 1.02 2447 0.61 3955 0.99 3815 0.95
theta[1] 1 1 1 4200 1.05 3691 0.92 4115 1.03 3902 0.98
theta[2] 1 1 1 4049 1.01 3946 0.99 3556 0.89 4059 1.01
theta[3] 1 1 1 3810 0.95 3819 0.95 4075 1.02 3810 0.95
theta[4] 1 1 1 3919 0.98 4188 1.05 3496 0.87 3885 0.97
theta[5] 1 1 1 3582 0.90 3839 0.96 3834 0.96 3842 0.96
theta[6] 1 1 1 3882 0.97 4059 1.01 4005 1.00 3820 0.96
theta[7] 1 1 1 4061 1.02 3813 0.95 4259 1.06 3871 0.97
theta[8] 1 1 1 3750 0.94 3968 0.99 4139 1.03 3826 0.96
lp__ 1 1 1 3192 0.80 2616 0.65 3835 0.96 3914 0.98
Various diagnostic plots of tau look reasonable as well.
plot_local_ess(fit = fit_cp4, par = "tau", nalpha = 100)
plot_quantile_ess(fit = fit_cp4, par = "tau", nalpha = 100)
plot_change_ess(fit = fit_cp4, par = "tau")
However, the rank plots still seem to show the problem.
samp_cp4 <- as.array(fit_cp4)
mcmc_hist_r_scale(samp_cp4[, , "tau"])
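Why thinning hides the autocorrelation from the diagnostics can be seen with a small simulated example; a sketch with an AR(1) chain, not part of the original analysis:
set.seed(1)
# a strongly autocorrelated AR(1) chain
ar1 <- as.vector(arima.sim(list(ar = 0.95), n = 2e5))
thinned <- ar1[seq(1, length(ar1), by = 200)]
# lag-1 autocorrelation before and after thinning by 200: the retained
# draws look nearly independent, so ESS estimates approach the number of
# retained draws and miss the slow mixing
round(acf(ar1, plot = FALSE)$acf[2], 2)
round(acf(thinned, plot = FALSE)$acf[2], 2)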
In the following, we want to expand our understanding of the non-centered parameterization of the hierarchical model fit to the eight schools data.
writeLines(readLines("eight_schools_ncp.stan"))
data {
int<lower=0> J;
real y[J];
real<lower=0> sigma[J];
}
parameters {
real mu;
real<lower=0> tau;
real theta_tilde[J];
}
transformed parameters {
real theta[J];
for (j in 1:J)
theta[j] = mu + tau * theta_tilde[j];
}
model {
mu ~ normal(0, 5);
tau ~ cauchy(0, 5);
theta_tilde ~ normal(0, 1);
y ~ normal(theta, sigma);
}
In the main text, we have already seen that the non-centered parameterization works better than the centered parameterization, at least when we use an increased adapt_delta value. Let's see what happens when using the default MCMC options of Stan.
fit_ncp <- stan(
file = 'eight_schools_ncp.stan', data = eight_schools,
iter = 2000, chains = 4, seed = 483892929, refresh = 0
)
Warning: There were 2 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
Warning: Examine the pairs() plot to diagnose sampling problems
We observe a few divergent transitions with the default of adapt_delta=0.8. Let’s analyze the sample.
monitor(fit_ncp)
Inference for the input samples (4 chains: each with iter = 2000; warmup = 1000):
Q5 Q50 Q95 Mean SD Rhat Bulk_ESS Tail_ESS
mu -0.98 4.41 9.52 4.38 3.24 1 4083 2378
tau 0.25 2.77 9.77 3.61 3.16 1 2303 1795
theta_tilde[1] -1.31 0.35 1.88 0.32 0.97 1 4571 2604
theta_tilde[2] -1.41 0.14 1.64 0.12 0.92 1 5771 3078
theta_tilde[3] -1.62 -0.10 1.49 -0.09 0.96 1 4966 3054
theta_tilde[4] -1.43 0.03 1.51 0.05 0.91 1 5442 2830
theta_tilde[5] -1.67 -0.17 1.35 -0.16 0.91 1 4273 3005
theta_tilde[6] -1.64 -0.08 1.48 -0.07 0.95 1 5192 2981
theta_tilde[7] -1.25 0.39 1.88 0.36 0.97 1 3898 2800
theta_tilde[8] -1.51 0.07 1.68 0.08 0.97 1 4848 2863
theta[1] -1.38 5.68 15.84 6.27 5.60 1 3790 2549
theta[2] -2.29 4.88 12.83 5.03 4.62 1 5002 2920
theta[3] -4.28 4.08 11.90 3.95 5.24 1 4001 3036
theta[4] -2.74 4.66 12.09 4.64 4.63 1 4699 3063
theta[5] -4.13 3.89 10.45 3.63 4.54 1 4310 3184
theta[6] -4.11 4.19 11.30 3.95 4.88 1 4965 2806
theta[7] -0.84 5.86 15.18 6.28 4.94 1 4599 3296
theta[8] -3.24 4.77 13.52 4.91 5.37 1 4461 3288
lp__ -11.06 -6.47 -3.68 -6.81 2.30 1 1711 2385
For each parameter, Bulk_ESS and Tail_ESS are crude measures of
effective sample size for bulk and tail quantities respectively (good values is
ESS > 400), and Rhat is the potential scale reduction factor on rank normalized
split chains (at convergence, Rhat = 1).
res <- monitor_extra(fit_ncp)
print(res)
Inference for the input samples (4 chains: each with iter = 2000; warmup = 1000):
mean se_mean sd Q5 Q50 Q95 seff reff sseff zseff zsseff zsreff Rhat
mu 4.38 0.05 3.24 -0.98 4.41 9.52 4023 1.01 4036 4070 4083 1.02 1
tau 3.61 0.06 3.16 0.25 2.77 9.77 2684 0.67 2700 2293 2303 0.58 1
theta_tilde[1] 0.32 0.01 0.97 -1.31 0.35 1.88 4528 1.13 4566 4532 4571 1.14 1
theta_tilde[2] 0.12 0.01 0.92 -1.41 0.14 1.64 5742 1.44 5758 5754 5771 1.44 1
theta_tilde[3] -0.09 0.01 0.96 -1.62 -0.10 1.49 4929 1.23 4974 4919 4966 1.24 1
theta_tilde[4] 0.05 0.01 0.91 -1.43 0.03 1.51 5370 1.34 5436 5376 5442 1.36 1
theta_tilde[5] -0.16 0.01 0.91 -1.67 -0.17 1.35 4222 1.06 4269 4226 4273 1.07 1
theta_tilde[6] -0.07 0.01 0.95 -1.64 -0.08 1.48 5182 1.30 5195 5179 5192 1.30 1
theta_tilde[7] 0.36 0.02 0.97 -1.25 0.39 1.88 3885 0.97 3888 3893 3898 0.97 1
theta_tilde[8] 0.08 0.01 0.97 -1.51 0.07 1.68 4838 1.21 4853 4834 4848 1.21 1
theta[1] 6.27 0.09 5.60 -1.38 5.68 15.84 3504 0.88 3530 3767 3790 0.95 1
theta[2] 5.03 0.07 4.62 -2.29 4.88 12.83 4892 1.22 4928 4965 5002 1.25 1
theta[3] 3.95 0.09 5.24 -4.28 4.08 11.90 3796 0.95 3825 3971 4001 1.00 1
theta[4] 4.64 0.07 4.63 -2.74 4.66 12.09 4554 1.14 4577 4676 4699 1.17 1
theta[5] 3.63 0.07 4.54 -4.13 3.89 10.45 4126 1.03 4174 4258 4310 1.08 1
theta[6] 3.95 0.07 4.88 -4.11 4.19 11.30 4726 1.18 4815 4922 4965 1.24 1
theta[7] 6.28 0.07 4.94 -0.84 5.86 15.18 4416 1.10 4423 4524 4599 1.15 1
theta[8] 4.91 0.08 5.37 -3.24 4.77 13.52 4046 1.01 4066 4439 4461 1.12 1
lp__ -6.81 0.06 2.30 -11.06 -6.47 -3.68 1678 0.42 1684 1704 1711 0.43 1
sRhat zRhat zsRhat zfsRhat zfsseff zfsreff tailseff tailreff medsseff medsreff
mu 1 1 1 1 1775 0.44 2378 0.59 4237 1.06
tau 1 1 1 1 3132 0.78 1795 0.45 3151 0.79
theta_tilde[1] 1 1 1 1 2141 0.54 2604 0.65 4489 1.12
theta_tilde[2] 1 1 1 1 1984 0.50 3078 0.77 5350 1.34
theta_tilde[3] 1 1 1 1 2136 0.53 3054 0.76 5397 1.35
theta_tilde[4] 1 1 1 1 2008 0.50 2830 0.71 4821 1.21
theta_tilde[5] 1 1 1 1 2263 0.57 3005 0.75 4282 1.07
theta_tilde[6] 1 1 1 1 2078 0.52 2981 0.75 4760 1.19
theta_tilde[7] 1 1 1 1 2260 0.56 2800 0.70 3823 0.96
theta_tilde[8] 1 1 1 1 1905 0.48 2863 0.72 4351 1.09
theta[1] 1 1 1 1 2528 0.63 2549 0.64 4358 1.09
theta[2] 1 1 1 1 2302 0.58 2920 0.73 4229 1.06
theta[3] 1 1 1 1 2553 0.64 3036 0.76 4002 1.00
theta[4] 1 1 1 1 2654 0.66 3063 0.77 4548 1.14
theta[5] 1 1 1 1 2783 0.70 3184 0.80 4523 1.13
theta[6] 1 1 1 1 2666 0.67 2806 0.70 4759 1.19
theta[7] 1 1 1 1 2493 0.62 3296 0.82 4315 1.08
theta[8] 1 1 1 1 2472 0.62 3288 0.82 4400 1.10
lp__ 1 1 1 1 2487 0.62 2385 0.60 1990 0.50
madsseff madsreff
mu 2343 0.59
tau 3171 0.79
theta_tilde[1] 2491 0.62
theta_tilde[2] 2406 0.60
theta_tilde[3] 2447 0.61
theta_tilde[4] 2419 0.60
theta_tilde[5] 2692 0.67
theta_tilde[6] 2412 0.60
theta_tilde[7] 2783 0.70
theta_tilde[8] 2000 0.50
theta[1] 2788 0.70
theta[2] 2542 0.64
theta[3] 2971 0.74
theta[4] 2859 0.71
theta[5] 2689 0.67
theta[6] 3152 0.79
theta[7] 2911 0.73
theta[8] 2644 0.66
lp__ 2796 0.70
All Rhats are close to 1, and ESS values are good despite a few divergent transitions. Small interval and quantile plots of tau reveal some sampling problems for small tau values, but these are not nearly as severe as for the centered parameterization.
plot_local_ess(fit = fit_ncp, par = "tau", nalpha = 20)
plot_quantile_ess(fit = fit_ncp, par = "tau", nalpha = 40)
Overall, the non-centered parameterization looks good even for the default settings of adapt_delta, and increasing it to 0.95 gets rid of the last remaining problems. This stands in sharp contrast to what we observed for the centered parameterization, where increasing adapt_delta didn’t help at all. Actually, this is something we observe quite often: A suboptimal parameterization can cause problems that are not simply solved by tuning the sampler. Instead, we have to adjust our model to achieve trustworthy inference.
We will also run the centered and non-centered parameterizations of the eight schools model with Jags.
The Jags code for the centered eight schools model looks as follows:
writeLines(readLines("eight_schools_cp.bugs"))
model {
for (j in 1:J) {
sigma_prec[j] <- pow(sigma[j], -2)
theta[j] ~ dnorm(mu, tau_prec)
y[j] ~ dnorm(theta[j], sigma_prec[j])
}
mu ~ dnorm(0, pow(5, -2))
tau ~ dt(0, pow(5, -2), 1)T(0, )
tau_prec <- pow(tau, -2)
}
First, we initialize the Jags model for later reuse.
jags_cp <- jags.model(
"eight_schools_cp.bugs",
data = eight_schools,
n.chains = 4, n.adapt = 10000
)
Compiling model graph
Resolving undeclared variables
Allocating nodes
Graph information:
Observed stochastic nodes: 8
Unobserved stochastic nodes: 10
Total graph size: 40
Initializing model
Next, we sample 1000 iterations for each of the 4 chains for easy comparison with the corresponding Stan results.
samp_jags_cp <- coda.samples(
jags_cp, c("theta", "mu", "tau"),
n.iter = 1000
)
samp_jags_cp <- aperm(abind(samp_jags_cp, along = 3), c(1, 3, 2))
Convergence diagnostics indicate problems in the sampling of mu and tau, but also, to a lesser degree, in all other parameters.
mon <- monitor(samp_jags_cp)
print(mon)
Inference for the input samples (4 chains: each with iter = 1000; warmup = 0):
Q5 Q50 Q95 Mean SD Rhat Bulk_ESS Tail_ESS
mu -0.84 4.34 9.88 4.35 3.23 1.03 137 140
tau 0.25 2.91 10.47 3.79 3.33 1.05 94 124
theta[1] -1.30 5.70 16.21 6.26 5.59 1.01 298 694
theta[2] -2.45 4.96 12.87 4.97 4.81 1.01 314 1210
theta[3] -5.04 4.14 11.37 3.82 5.28 1.02 265 1012
theta[4] -2.70 4.74 12.29 4.77 4.81 1.01 306 1366
theta[5] -4.82 3.62 10.40 3.45 4.78 1.02 206 638
theta[6] -4.42 4.16 11.36 3.97 4.88 1.01 323 820
theta[7] -0.66 6.05 15.70 6.49 5.22 1.01 248 825
theta[8] -3.66 4.81 13.50 4.85 5.39 1.01 290 1050
For each parameter, Bulk_ESS and Tail_ESS are crude measures of
effective sample size for bulk and tail quantities respectively (good values is
ESS > 400), and Rhat is the potential scale reduction factor on rank normalized
split chains (at convergence, Rhat = 1).
We also see problems in the sampling of tau using various diagnostic plots.
plot_local_ess(samp_jags_cp, par = "tau", nalpha = 20)
plot_quantile_ess(samp_jags_cp, par = "tau", nalpha = 20)
plot_change_ess(samp_jags_cp, par = "tau")
Let’s see what happens if we run 10 times longer chains.
samp_jags_cp <- coda.samples(
jags_cp, c("theta", "mu", "tau"),
n.iter = 10000
)
samp_jags_cp <- aperm(abind(samp_jags_cp, along = 3), c(1, 3, 2))
Convergence looks better now, although tau is still not estimated very efficiently.
mon <- monitor(samp_jags_cp)
print(mon)
Inference for the input samples (4 chains: each with iter = 10000; warmup = 0):
Q5 Q50 Q95 Mean SD Rhat Bulk_ESS Tail_ESS
mu -0.85 4.51 9.80 4.48 3.25 1 1460 2988
tau 0.22 2.74 9.68 3.58 3.22 1 623 716
theta[1] -1.31 5.68 16.26 6.30 5.52 1 2316 5164
theta[2] -2.24 4.92 12.61 5.03 4.62 1 2685 9628
theta[3] -4.72 4.27 11.68 3.99 5.24 1 2566 6820
theta[4] -2.67 4.79 12.36 4.83 4.70 1 2643 9004
theta[5] -4.33 4.02 10.61 3.72 4.61 1 2269 7165
theta[6] -3.90 4.37 11.43 4.15 4.80 1 2560 9185
theta[7] -0.74 5.87 15.23 6.37 4.97 1 1987 5436
theta[8] -3.18 4.82 13.32 4.90 5.23 1 2799 8888
For each parameter, Bulk_ESS and Tail_ESS are crude measures of
effective sample size for bulk and tail quantities respectively (good values is
ESS > 400), and Rhat is the potential scale reduction factor on rank normalized
split chains (at convergence, Rhat = 1).
The diagnostic plots of quantiles and small intervals tell a similar story.
plot_local_ess(samp_jags_cp, par = "tau", nalpha = 20)
plot_quantile_ess(samp_jags_cp, par = "tau", nalpha = 20)
Notably, however, the increase in the effective sample size of tau is linear in the total number of draws, indicating that convergence for tau may be achieved by simply running longer chains.
plot_change_ess(samp_jags_cp, par = "tau")
Result: Similar to Stan, Jags also has convergence problems with the centered parameterization of the eight schools model.
The Jags code for the non-centered eight schools model looks as follows:
writeLines(readLines("eight_schools_ncp.bugs"))
model {
for (j in 1:J) {
sigma_prec[j] <- pow(sigma[j], -2)
theta_tilde[j] ~ dnorm(0, 1)
theta[j] = mu + tau * theta_tilde[j]
y[j] ~ dnorm(theta[j], sigma_prec[j])
}
mu ~ dnorm(0, pow(5, -2))
tau ~ dt(0, pow(5, -2), 1)T(0, )
}
First, we initialize the Jags model for later reuse.
jags_ncp <- jags.model(
"eight_schools_ncp.bugs",
data = eight_schools,
n.chains = 4, n.adapt = 10000
)
Compiling model graph
Resolving undeclared variables
Allocating nodes
Graph information:
Observed stochastic nodes: 8
Unobserved stochastic nodes: 10
Total graph size: 55
Initializing model
Next, we sample 1000 iterations for each of the 4 chains for easy comparison with the corresponding Stan results.
samp_jags_ncp <- coda.samples(
jags_ncp, c("theta", "mu", "tau"),
n.iter = 1000
)
samp_jags_ncp <- aperm(abind(samp_jags_ncp, along = 3), c(1, 3, 2))
Convergence diagnostics indicate much better mixing than for the centered eight schools model.
mon <- monitor(samp_jags_ncp)
print(mon)
Inference for the input samples (4 chains: each with iter = 1000; warmup = 0):
Q5 Q50 Q95 Mean SD Rhat Bulk_ESS Tail_ESS
mu -1.01 4.44 9.83 4.41 3.28 1 2972 3133
tau 0.26 2.84 9.72 3.68 3.10 1 1118 1395
theta[1] -1.56 5.75 15.82 6.22 5.42 1 3201 2579
theta[2] -2.59 4.95 12.43 5.00 4.66 1 4147 3379
theta[3] -4.89 4.12 11.70 3.85 5.24 1 3631 3055
theta[4] -3.15 4.80 12.63 4.75 4.79 1 4072 3528
theta[5] -4.66 3.92 10.64 3.57 4.78 1 3293 3099
theta[6] -3.82 4.35 11.66 4.10 4.83 1 3527 3410
theta[7] -0.82 5.96 15.09 6.33 4.95 1 2920 2898
theta[8] -3.43 4.82 13.62 4.88 5.30 1 3752 3300
For each parameter, Bulk_ESS and Tail_ESS are crude measures of
effective sample size for bulk and tail quantities respectively (good values is
ESS > 400), and Rhat is the potential scale reduction factor on rank normalized
split chains (at convergence, Rhat = 1).
Specifically, the mixing of tau looks much better, although we still see some problems in the estimation of the larger quantiles.
plot_local_ess(samp_jags_ncp, par = "tau", nalpha = 20)
plot_quantile_ess(samp_jags_ncp, par = "tau", nalpha = 20)
The change in effective sample size is roughly linear, indicating that the remaining convergence problems are likely to be solved by running longer chains.
plot_change_ess(samp_jags_ncp, par = "tau")
Result: Similar to Stan, Jags can sample from the non-centered parameterization of the eight schools model much better than from the centered parameterization.
We illustrate rank normalization with a few examples. First, we plot histograms and empirical cumulative distribution functions (ECDF) with respect to the original parameter values (\(\theta\)), scaled ranks (ranks divided by the maximum rank), and rank normalized values (\(z\)). We use scaled ranks to make the plots look similar for different numbers of draws.
100 draws from Normal(0, 1):
n <- 100
theta <- rnorm(n)
plot_ranknorm(theta, n)
100 draws from Exponential(1):
theta <- rexp(n)
plot_ranknorm(theta, n)
100 draws from Cauchy(0, 1):
theta <- rcauchy(n)
plot_ranknorm(theta, n)
In the above plots, the ECDFs with respect to scaled ranks and rank normalized \(z\)-values look exactly the same for all distributions. In split-\(\widehat{R}\) and effective sample size computations, we rank all draws jointly, but then compare the ranks and ECDFs of the individual split chains. To illustrate the variation between chains, we draw 8 batches of 100 draws each from Normal(0, 1):
n <- 100
m <- 8
theta <- rnorm(n * m)
plot_ranknorm(theta, n, m)
The variation in the ECDF due to the variation in ranks is now visible also in the scaled ranks and rank normalized \(z\)-values of the different batches (chains).
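For reference, the splitting of the chains mentioned above can be sketched in a few lines; a minimal sketch assuming the draws are arranged as an iterations-by-chains matrix (this mirrors the idea, not the exact code in monitornew.R):
split_chains <- function(x) {
  # split each chain into two halves, doubling the number of chains
  # used in the diagnostics
  n <- floor(nrow(x) / 2)
  cbind(x[1:n, , drop = FALSE], x[(n + 1):(2 * n), , drop = FALSE])
}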
The benefit of rank normalization is more obvious for non-normal distributions such as the Cauchy:
theta <- rcauchy(n * m)
plot_ranknorm(theta, n, m)
Rank normalization makes the subsequent computations well defined and invariant under bijective transformations. This means that we get the same results, for example, whether we use unconstrained or constrained parameterizations of a model.
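The rank normalization itself can be sketched in a few lines, using the fractional rank offsets \(3/8\) and \(1/4\) as in the paper (a minimal sketch, not the exact code in monitornew.R):
rank_normalize <- function(theta) {
  S <- length(theta)
  r <- rank(theta, ties.method = "average")
  # normal quantiles of fractional ranks
  qnorm((r - 3/8) / (S + 1/4))
}
z <- rank_normalize(rcauchy(1000))
round(c(mean(z), sd(z)), 2)  # close to 0 and 1 even for Cauchy draws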
In the paper, we defined the empirical CDF (ECDF) estimate for any \(\theta_\alpha\) as \[ p(\theta \leq \theta_\alpha) \approx \bar{I}_\alpha = \frac{1}{S}\sum_{s=1}^S I(\theta^{(s)} \leq \theta_\alpha). \]
For independent draws, the uncertainty about \(p(\theta \leq \theta_\alpha)\) given the count \(S\bar{I}_\alpha\) can be described by a \({\rm Beta}(S\bar{I}_\alpha + 1, S - S\bar{I}_\alpha + 1)\) distribution. Thus we can easily examine the variation of the ECDF for any \(\theta_\alpha\) value from a single chain. If \(S\bar{I}_\alpha\) is not very close to \(0\) or \(S\) and \(S\) is large, we can use a normal approximation with the variance of the Beta distribution,
\[ {\rm Var}[p(\theta \leq \theta_\alpha)] = \frac{(S\bar{I}_\alpha+1)(S-S\bar{I}_\alpha+1)}{(S+2)^2(S+3)}. \] We illustrate uncertainty intervals of the Beta distribution and of the normal approximation of the ECDF for 100 draws from Normal(0, 1):
n <- 100
m <- 1
theta <- rnorm(n * m)
plot_ranknorm(theta, n, m, interval = TRUE)
Uncertainty intervals of the ECDF for draws from Cauchy(0, 1) again illustrate the improved visual clarity of plotting with scaled ranks:
n <- 100
m <- 1
theta <- rcauchy(n * m)
plot_ranknorm(theta, n, m, interval = TRUE)
The above plots illustrate that the normal approximation is accurate for practical purposes in MCMC diagnostics.
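The pointwise Beta interval behind these plots can also be computed directly; a minimal sketch, where the count of 37 draws below \(\theta_\alpha\) is hypothetical:
S <- 100
count <- 37  # hypothetical number of draws with theta <= theta_alpha
# 90% central interval for p(theta <= theta_alpha)
round(qbeta(c(0.05, 0.95), count + 1, S - count + 1), 2)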
We have already seen that the effective sample size of dynamic HMC can be higher than with independent draws. The next example illustrates interesting relative efficiency phenomena due to the properties of dynamic HMC algorithms.
We sample from a simple 16-dimensional standard normal model.
writeLines(readLines("normal.stan"))
data {
int<lower=1> J;
}
parameters {
vector[J] x;
}
model {
x ~ normal(0, 1);
}
fit_n <- stan(
file = 'normal.stan', data = data.frame(J = 16),
iter = 20000, chains = 4, seed = 483892929, refresh = 0
)
samp <- as.array(fit_n)
monitor(samp)
Inference for the input samples (4 chains: each with iter = 10000; warmup = 0):
Q5 Q50 Q95 Mean SD Rhat Bulk_ESS Tail_ESS
x[1] -1.66 0.00 1.65 0.00 1.00 1 98264 28709
x[2] -1.64 -0.01 1.64 0.00 1.00 1 95812 29664
x[3] -1.63 0.00 1.62 0.00 0.99 1 98640 28669
x[4] -1.65 0.00 1.66 0.01 1.01 1 97302 29166
x[5] -1.64 0.00 1.63 0.00 1.00 1 101542 29930
x[6] -1.65 0.00 1.65 0.00 1.00 1 96292 28376
x[7] -1.63 0.01 1.63 0.00 0.99 1 96016 29238
x[8] -1.65 -0.01 1.65 0.00 1.00 1 100375 29893
x[9] -1.64 0.01 1.65 0.00 1.00 1 101141 28621
x[10] -1.62 -0.01 1.63 0.00 0.99 1 103126 29411
x[11] -1.65 0.01 1.66 0.00 1.00 1 95886 28488
x[12] -1.62 0.00 1.63 0.01 0.99 1 98433 29228
x[13] -1.62 0.01 1.65 0.00 0.99 1 98181 27421
x[14] -1.63 0.00 1.63 0.00 0.99 1 97313 27507
x[15] -1.63 0.01 1.64 0.01 0.99 1 95223 29139
x[16] -1.66 0.00 1.65 0.00 1.01 1 99980 29639
lp__ -13.00 -7.66 -3.92 -7.95 2.79 1 14489 19627
For each parameter, Bulk_ESS and Tail_ESS are crude measures of
effective sample size for bulk and tail quantities respectively (good values is
ESS > 400), and Rhat is the potential scale reduction factor on rank normalized
split chains (at convergence, Rhat = 1).
res <- monitor_extra(samp)
print(res)
Inference for the input samples (4 chains: each with iter = 10000; warmup = 0):
mean se_mean sd Q5 Q50 Q95 seff reff sseff zseff zsseff zsreff Rhat sRhat
x[1] 0.00 0.00 1.00 -1.66 0.00 1.65 97846 2.45 98426 97687 98264 2.46 1 1
x[2] 0.00 0.00 1.00 -1.64 -0.01 1.64 95437 2.39 95717 95531 95812 2.40 1 1
x[3] 0.00 0.00 0.99 -1.63 0.00 1.62 98375 2.46 98703 98313 98640 2.47 1 1
x[4] 0.01 0.00 1.01 -1.65 0.00 1.66 96642 2.42 97290 96654 97302 2.43 1 1
x[5] 0.00 0.00 1.00 -1.64 0.00 1.63 101317 2.53 101514 101344 101542 2.54 1 1
x[6] 0.00 0.00 1.00 -1.65 0.00 1.65 96011 2.40 96299 96003 96292 2.41 1 1
x[7] 0.00 0.00 0.99 -1.63 0.01 1.63 95558 2.39 96082 95492 96016 2.40 1 1
x[8] 0.00 0.00 1.00 -1.65 -0.01 1.65 99937 2.50 100392 99920 100375 2.51 1 1
x[9] 0.00 0.00 1.00 -1.64 0.01 1.65 100824 2.52 101134 100831 101141 2.53 1 1
x[10] 0.00 0.00 0.99 -1.62 -0.01 1.63 102178 2.55 102983 102317 103126 2.58 1 1
x[11] 0.00 0.00 1.00 -1.65 0.01 1.66 95226 2.38 95863 95250 95886 2.40 1 1
x[12] 0.01 0.00 0.99 -1.62 0.00 1.63 97828 2.45 98357 97903 98433 2.46 1 1
x[13] 0.00 0.00 0.99 -1.62 0.01 1.65 97666 2.44 98166 97682 98181 2.45 1 1
x[14] 0.00 0.00 0.99 -1.63 0.00 1.63 96805 2.42 97327 96792 97313 2.43 1 1
x[15] 0.01 0.00 0.99 -1.63 0.01 1.64 94911 2.37 95185 94950 95223 2.38 1 1
x[16] 0.00 0.00 1.01 -1.66 0.00 1.65 99541 2.49 100111 99413 99980 2.50 1 1
lp__ -7.95 0.02 2.79 -13.00 -7.66 -3.92 14922 0.37 14934 14480 14489 0.36 1 1
zRhat zsRhat zfsRhat zfsseff zfsreff tailseff tailreff medsseff medsreff madsseff madsreff
x[1] 1 1 1 16450 0.41 28709 0.72 82432 2.06 19205 0.48
x[2] 1 1 1 16462 0.41 29664 0.74 75494 1.89 19208 0.48
x[3] 1 1 1 16152 0.40 28669 0.72 78630 1.97 18732 0.47
x[4] 1 1 1 16075 0.40 29166 0.73 81148 2.03 19079 0.48
x[5] 1 1 1 16785 0.42 29930 0.75 79953 2.00 20116 0.50
x[6] 1 1 1 16578 0.41 28376 0.71 79018 1.98 19626 0.49
x[7] 1 1 1 17109 0.43 29238 0.73 81690 2.04 19543 0.49
x[8] 1 1 1 16400 0.41 29893 0.75 79263 1.98 18828 0.47
x[9] 1 1 1 15890 0.40 28621 0.72 81119 2.03 18683 0.47
x[10] 1 1 1 16460 0.41 29411 0.74 76948 1.92 19242 0.48
x[11] 1 1 1 15969 0.40 28488 0.71 79164 1.98 18329 0.46
x[12] 1 1 1 15294 0.38 29228 0.73 81841 2.05 18720 0.47
x[13] 1 1 1 15370 0.38 27421 0.69 80615 2.02 18210 0.46
x[14] 1 1 1 16452 0.41 27507 0.69 77592 1.94 19050 0.48
x[15] 1 1 1 16651 0.42 29139 0.73 80406 2.01 19622 0.49
x[16] 1 1 1 16395 0.41 29639 0.74 82347 2.06 18845 0.47
lp__ 1 1 1 21484 0.54 19627 0.49 17074 0.43 23603 0.59
The Bulk-ESS for all \(x\) is at least 95223. However, the Tail-ESS for all \(x\) is at most 29930. Further, the Bulk-ESS for lp__ is only 14489.
If we look at all the Stan examples in this notebook, we see that the relative efficiency of the Bulk-ESS for lp__ is always below 0.5. This is because lp__ correlates strongly with the total energy in HMC, which is updated with a random walk proposal once per iteration. Thus, it is likely that lp__ shows some random walk behavior as well, leading to autocorrelation and a small relative efficiency. At the same time, adaptive HMC can create antithetic Markov chains, which have negative autocorrelations at odd lags. This results in a Bulk-ESS greater than the total number of draws \(S\) for some parameters.
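Both behaviours can be seen directly in the autocorrelations of the draws; a quick sketch using the samp array defined above, where we would expect negative values at odd lags for x[1] and positive, slowly decaying values for lp__:
# autocorrelations at lags 1-3 for one chain
round(acf(samp[, 1, "x[1]"], lag.max = 3, plot = FALSE)$acf[-1], 2)
round(acf(samp[, 1, "lp__"], lag.max = 3, plot = FALSE)$acf[-1], 2)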
Let’s check the effective sample size in different parts of the posterior by computing the effective sample size for small interval estimates for x[1].
plot_local_ess(fit_n, par = 1, nalpha = 100)
The relative efficiency of the probability estimate for a small interval is close to 1, with a slight drop in the tails. This is a good result, but still far from the efficiency of the bulk, mean, and median estimates. Let's check the effective sample size for quantiles.
plot_quantile_ess(fit = fit_n, par = 1, nalpha = 100)
Central quantile estimates have higher effective sample size than tail quantile estimates.
The total energy of HMC limits how far into the tails a chain can go in a single iteration. The tails of the target have high potential energy, and thus only iterations with high total energy can reach them. This suggests that the random walk in total energy causes a random walk in the variance of \(x\). Let's check the second moment of \(x\).
samp_x2 <- as.array(fit_n, pars = "x")^2
monitor(samp_x2)
Inference for the input samples (4 chains: each with iter = 10000; warmup = 0):
Q5 Q50 Q95 Mean SD Rhat Bulk_ESS Tail_ESS
x[1] 0 0.46 3.85 1.01 1.44 1 16443 18225
x[2] 0 0.44 3.80 0.99 1.42 1 16492 19392
x[3] 0 0.45 3.80 0.98 1.39 1 16148 18342
x[4] 0 0.45 3.95 1.01 1.46 1 16070 18288
x[5] 0 0.45 3.86 1.00 1.42 1 16785 18672
x[6] 0 0.45 3.91 1.00 1.42 1 16572 17525
x[7] 0 0.45 3.74 0.99 1.39 1 17097 19120
x[8] 0 0.46 3.80 1.00 1.42 1 16397 18152
x[9] 0 0.45 3.81 1.00 1.41 1 15922 18049
x[10] 0 0.44 3.73 0.98 1.39 1 16461 18098
x[11] 0 0.46 3.85 1.00 1.41 1 16008 19463
x[12] 0 0.45 3.75 0.99 1.41 1 15368 17674
x[13] 0 0.44 3.83 0.98 1.38 1 15371 16755
x[14] 0 0.45 3.75 0.98 1.37 1 16461 17715
x[15] 0 0.45 3.77 0.98 1.38 1 16655 19241
x[16] 0 0.47 3.86 1.01 1.41 1 16400 19741
For each parameter, Bulk_ESS and Tail_ESS are crude measures of
effective sample size for bulk and tail quantities respectively (good values is
ESS > 400), and Rhat is the potential scale reduction factor on rank normalized
split chains (at convergence, Rhat = 1).
res <- monitor_extra(samp_x2)
print(res)
Inference for the input samples (4 chains: each with iter = 10000; warmup = 0):
mean se_mean sd Q5 Q50 Q95 seff reff sseff zseff zsseff zsreff Rhat sRhat zRhat zsRhat
x[1] 1.01 0.01 1.44 0 0.46 3.85 14657 0.37 14664 16440 16443 0.41 1 1 1 1
x[2] 0.99 0.01 1.42 0 0.44 3.80 15511 0.39 15518 16488 16492 0.41 1 1 1 1
x[3] 0.98 0.01 1.39 0 0.45 3.80 14684 0.37 14703 16133 16148 0.40 1 1 1 1
x[4] 1.01 0.01 1.46 0 0.45 3.95 14510 0.36 14518 16039 16070 0.40 1 1 1 1
x[5] 1.00 0.01 1.42 0 0.45 3.86 15017 0.38 15029 16766 16785 0.42 1 1 1 1
x[6] 1.00 0.01 1.42 0 0.45 3.91 14368 0.36 14380 16547 16572 0.41 1 1 1 1
x[7] 0.99 0.01 1.39 0 0.45 3.74 15464 0.39 15488 17080 17097 0.43 1 1 1 1
x[8] 1.00 0.01 1.42 0 0.46 3.80 14773 0.37 14766 16397 16397 0.41 1 1 1 1
x[9] 1.00 0.01 1.41 0 0.45 3.81 13439 0.34 13467 15910 15922 0.40 1 1 1 1
x[10] 0.98 0.01 1.39 0 0.44 3.73 14654 0.37 14672 16430 16461 0.41 1 1 1 1
x[11] 1.00 0.01 1.41 0 0.46 3.85 15068 0.38 15098 15997 16008 0.40 1 1 1 1
x[12] 0.99 0.01 1.41 0 0.45 3.75 14215 0.36 14221 15342 15368 0.38 1 1 1 1
x[13] 0.98 0.01 1.38 0 0.44 3.83 13548 0.34 13547 15368 15371 0.38 1 1 1 1
x[14] 0.98 0.01 1.37 0 0.45 3.75 14547 0.36 14565 16418 16461 0.41 1 1 1 1
x[15] 0.98 0.01 1.38 0 0.45 3.77 15417 0.39 15414 16652 16655 0.42 1 1 1 1
x[16] 1.01 0.01 1.41 0 0.47 3.86 15551 0.39 15561 16390 16400 0.41 1 1 1 1
zfsRhat zfsseff zfsreff tailseff tailreff medsseff medsreff madsseff madsreff
x[1] 1 18445 0.46 18225 0.46 19172 0.48 23268 0.58
x[2] 1 19699 0.49 19392 0.48 19309 0.48 24908 0.62
x[3] 1 18532 0.46 18342 0.46 18694 0.47 23865 0.60
x[4] 1 18983 0.47 18288 0.46 19156 0.48 23907 0.60
x[5] 1 19535 0.49 18672 0.47 20102 0.50 25043 0.63
x[6] 1 17535 0.44 17525 0.44 19596 0.49 22613 0.57
x[7] 1 19019 0.48 19120 0.48 19555 0.49 23336 0.58
x[8] 1 18920 0.47 18152 0.45 18816 0.47 24195 0.60
x[9] 1 18386 0.46 18049 0.45 18674 0.47 22198 0.55
x[10] 1 18570 0.46 18098 0.45 19147 0.48 23900 0.60
x[11] 1 19482 0.49 19463 0.49 18393 0.46 24709 0.62
x[12] 1 18691 0.47 17674 0.44 18588 0.46 23963 0.60
x[13] 1 18506 0.46 16755 0.42 18258 0.46 24132 0.60
x[14] 1 18823 0.47 17715 0.44 19062 0.48 23428 0.59
x[15] 1 19633 0.49 19241 0.48 19620 0.49 24476 0.61
x[16] 1 20083 0.50 19741 0.49 18829 0.47 24441 0.61
The mean of the Bulk-ESS for \(x_j^2\) is 16290.62, which is quite close to the Bulk-ESS for lp__. This is not that surprising, as the potential energy in the normal model is proportional to \(\sum_{j=1}^J x_j^2\).
Let’s check the effective sample size in different parts of the posterior by computing the effective sample size for small interval probability estimates for x[1]^2.
plot_local_ess(fit = samp_x2, par = 1, nalpha = 100)
The relative efficiency is mostly a bit below 1, but for the right tail of \(x_1^2\) it drops. This is likely because only some iterations have a total energy high enough to obtain draws from the high-energy part of the tail. Let's check the effective sample size for quantiles.
plot_quantile_ess(fit = samp_x2, par = 1, nalpha = 100)
We can see the correlation between lp__ and the magnitude of x[1] in the following plot.
samp <- as.array(fit_n)
qplot(
as.vector(samp[, , "lp__"]),
abs(as.vector(samp[, , "x[1]"]))
) +
labs(x = 'lp__', y = 'x[1]')
Low lp__ values correspond to high energy and more variation in x[1], while high lp__ values correspond to low energy and small variation in x[1]. Finally, \(\sum_{j=1}^J x_j^2\) is perfectly (negatively) correlated with lp__.
qplot(
as.vector(samp[, , "lp__"]),
as.vector(apply(samp[, , 1:16]^2, 1:2, sum))
) +
labs(x = 'lp__', y = 'sum(x^2)')
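In fact, since Stan drops constant terms from sampling statements, lp__ in this model equals \(-\frac{1}{2}\sum_{j=1}^{J} x_j^2\) exactly, which we can verify for a single draw (a quick check):
# should be numerically zero: lp__ = -0.5 * sum(x^2) up to dropped constants
round(samp[1, 1, "lp__"] + 0.5 * sum(samp[1, 1, 1:16]^2), 6)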
This shows that even if we get high effective sample size estimates for central quantities (like mean or median), it is important to look at the relative efficiency of scale and tail quantities, as well. The effective sample size of lp__ can also indicate problems of sampling in the tails.
makevars <- file.path(Sys.getenv("HOME"), ".R/Makevars")
if (file.exists(makevars)) {
writeLines(readLines(makevars))
}
CXX14FLAGS=-O3 -Wno-unused-variable -Wno-unused-function
CXX14 = $(BINPREF)g++ -m$(WIN) -std=c++1y
CXX11FLAGS=-O3 -Wno-unused-variable -Wno-unused-function
devtools::session_info("rstan")
- Session info -----------------------------------------------------------------------------------
setting value
version R version 3.5.2 (2018-12-20)
os Windows 10 x64
system x86_64, mingw32
ui RTerm
language (EN)
collate German_Germany.1252
ctype German_Germany.1252
tz Europe/Berlin
date 2019-03-12
- Packages ---------------------------------------------------------------------------------------
package * version date lib source
assertthat 0.2.0 2017-04-11 [1] CRAN (R 3.5.0)
backports 1.1.3 2018-12-14 [1] CRAN (R 3.5.1)
BH 1.69.0-1 2019-01-07 [1] CRAN (R 3.5.2)
callr 3.1.1 2018-12-21 [1] CRAN (R 3.5.2)
checkmate 1.9.1 2019-01-15 [1] CRAN (R 3.5.2)
cli 1.0.1 2018-09-25 [1] CRAN (R 3.5.1)
colorspace 1.4-0 2019-01-13 [1] CRAN (R 3.5.2)
crayon 1.3.4 2017-09-16 [1] CRAN (R 3.5.0)
desc 1.2.0 2018-05-01 [1] CRAN (R 3.5.0)
digest 0.6.18 2018-10-10 [1] CRAN (R 3.5.1)
fansi 0.4.0 2018-10-05 [1] CRAN (R 3.5.1)
ggplot2 * 3.1.0 2018-10-25 [1] CRAN (R 3.5.1)
glue 1.3.0 2018-07-17 [1] CRAN (R 3.5.1)
gridExtra * 2.3 2017-09-09 [1] CRAN (R 3.5.0)
gtable 0.2.0 2016-02-26 [1] CRAN (R 3.5.0)
inline 0.3.15 2018-05-18 [1] CRAN (R 3.5.1)
labeling 0.3 2014-08-23 [1] CRAN (R 3.5.0)
lattice 0.20-38 2018-11-04 [2] CRAN (R 3.5.2)
lazyeval 0.2.1 2017-10-29 [1] CRAN (R 3.5.0)
loo 2.1.0 2019-03-12 [1] Github (stan-dev/loo@b5a23b1)
magrittr 1.5 2014-11-22 [1] CRAN (R 3.5.0)
MASS 7.3-51.1 2018-11-01 [2] CRAN (R 3.5.2)
Matrix 1.2-15 2018-11-01 [2] CRAN (R 3.5.2)
matrixStats 0.54.0 2018-07-23 [1] CRAN (R 3.5.1)
mgcv 1.8-26 2018-11-21 [1] CRAN (R 3.5.1)
munsell 0.5.0 2018-06-12 [1] CRAN (R 3.5.1)
nlme 3.1-137 2018-04-07 [2] CRAN (R 3.5.2)
pillar 1.3.1 2018-12-15 [1] CRAN (R 3.5.1)
pkgbuild 1.0.2 2018-10-16 [1] CRAN (R 3.5.1)
pkgconfig 2.0.2 2018-08-16 [1] CRAN (R 3.5.1)
plyr 1.8.4 2016-06-08 [1] CRAN (R 3.5.0)
prettyunits 1.0.2 2015-07-13 [1] CRAN (R 3.5.1)
processx 3.2.1 2018-12-05 [1] CRAN (R 3.5.1)
ps 1.3.0 2018-12-21 [1] CRAN (R 3.5.2)
R6 2.4.0 2019-02-14 [1] CRAN (R 3.5.2)
RColorBrewer 1.1-2 2014-12-07 [1] CRAN (R 3.5.0)
Rcpp 1.0.0 2018-11-07 [1] CRAN (R 3.5.1)
RcppEigen 0.3.3.5.0 2018-11-24 [1] CRAN (R 3.5.1)
reshape2 1.4.3 2017-12-11 [1] CRAN (R 3.5.0)
rlang 0.3.1 2019-01-08 [1] CRAN (R 3.5.2)
rprojroot 1.3-2 2018-01-03 [1] CRAN (R 3.5.0)
rstan * 2.18.2 2018-11-07 [1] CRAN (R 3.5.1)
scales 1.0.0 2018-08-09 [1] CRAN (R 3.5.1)
StanHeaders * 2.18.0-1 2018-12-13 [1] CRAN (R 3.5.1)
stringi 1.3.1 2019-02-13 [1] CRAN (R 3.5.2)
stringr * 1.4.0 2019-02-10 [1] CRAN (R 3.5.2)
tibble * 2.0.1 2019-01-12 [1] CRAN (R 3.5.2)
utf8 1.1.4 2018-05-24 [1] CRAN (R 3.5.1)
viridisLite 0.3.0 2018-02-01 [1] CRAN (R 3.5.0)
withr 2.1.2.9000 2018-12-18 [1] Github (jimhester/withr@be57595)
[1] C:/Users/paulb/Documents/R/win-library/3.5
[2] C:/Program Files/R/R-3.5.2/library